
Latest code and profiling

Graham Breed <gbreed@gmail.com>

1/18/2006 1:22:36 PM

If I remember I'll copy my main source code in Python, Pyrex, C and Ocaml to http://x31eq.com/temper/regular.zip

The Pyrex is new, and the fastest.

To give you an idea of how fast each part of the search is, here's some profile output:

temper python regular_profile.py big | tail -n 23 | head -n 21
ncalls tottime cumtime filename:lineno(function)
...
27 82.340 396.800 regular.py:309(getLinearTemperaments)
1210950 119.250 160.490 regular.py:362(__init__)
611514 22.480 68.970 regular.py:417(optimalRMSError)
1210950 51.590 51.590 regular.py:404(complexity)
611514 46.490 46.490 regular.py:428(magicSums)
1210950 41.240 41.240 regular.py:589(extendedEuclid)
27 0.860 34.030 regular.py:106(getEqualTemperaments)
231270/13947 13.160 32.150 regular.py:189(limitedSupersets)
390398 12.570 25.350 regular.py:549(add)
217323 5.240 14.650 regular.py:179(addEntry)
393080 12.860 12.860 regular.py:392(invariant)
231270 7.170 9.940 regular.py:168(__init__)
390398 4.870 4.870 regular.py:303(defaultBadness)
231270 3.330 3.330 regular.py:230(value)
27 2.900 2.900 regular.py:556(retrieve)
231270 2.770 2.770 regular.py:224(setData)

This is for a search over 300 equal temperaments. The time taken to find the equal temperaments is only 34 seconds out of the total 396 seconds of run time. With simpler calculations it's more significant. It would be a lot higher if I used the old algorithm of rounding every prime up and down and filtering the results by their error. That produces 2**(n-1) mappings for n primes and gets out of hand as n gets large.
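
For anyone who hasn't seen it, the old approach was roughly this. It's only a sketch with illustrative names, not the real code:

import itertools
import math

def candidate_mappings(notes, primes=(2, 3, 5, 7)):
    # Round each prime's ideal step count both up and down; the
    # octave is fixed, so this yields 2**(len(primes) - 1) mappings
    # to be filtered by error afterwards.
    ideals = [notes * math.log2(p) for p in primes[1:]]
    choices = [(notes,)] + [(math.floor(x), math.ceil(x)) for x in ideals]
    for mapping in itertools.product(*choices):
        yield mapping

With 4 primes that's 8 candidates per equal temperament; with 10 primes it's already 512, which is why it gets out of hand.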

The largest part of the remaining time is spent initializing the rank 2 temperaments, which includes setting the period and generator. 41 of those 160 seconds are in extendedEuclid, which is used to get the generators from the numbers of notes to the octave. It doesn't take any longer for higher primes, except that the numbers tend to be larger.
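
In case it helps, extendedEuclid is essentially the textbook extended Euclidean algorithm, something like this (the real routine may differ in detail):

def extended_euclid(a, b):
    # Return (g, x, y) with x*a + y*b == g == gcd(a, b). The
    # Bezout coefficients are what pin down a generator from the
    # two numbers of notes to the octave.
    if b == 0:
        return a, 1, 0
    g, x, y = extended_euclid(b, a % b)
    return g, y, x - (a // b) * y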

The next largest amount of time is spent calculating the error. It doesn't take much longer than the complexity, but the complexity threshold is checked first, so the error is calculated less often. Interestingly, the pure Python error optimization is faster than one using Numeric Python, even though the latter uses native code routines. The "magicSums" function is part of the error calculation.
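
The filtering order amounts to something like this (illustrative method names, not the real API):

def survives(temperament, max_complexity, max_error):
    # The complexity is cheap, so test it first; only optimize the
    # RMS error for temperaments that pass the complexity cutoff.
    if temperament.complexity() > max_complexity:
        return False
    return temperament.optimalRMSError() <= max_error

You can see the effect in the profile above: complexity runs 1210950 times but optimalRMSError only 611514 times.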

The "add" method is for adding a temperament to the set for eventally sorting by badness. It includes the overhead for calculating the hash code, looking it up, and so on. About half the time is spent calculating the invariant -- here the octave-equivalent mapping without the 0 at the start and multiplied by the number of periods to an octave.

Originally, it spent a lot of time checking that the temperament didn't have contorsion. So I moved that check to the end, so it only examines temperaments that would otherwise have been returned. Because it returns only a small fraction of the temperaments it looks at, the contorsion check doesn't even show up in this list. With smaller searches it's still there.
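
For reference, one standard contorsion test (not necessarily the exact regular.py routine) is that a rank 2 mapping is contorted when the GCD of the 2x2 minors of its mapping matrix exceeds 1:

from itertools import combinations
from math import gcd

def has_contorsion(period_map, generator_map):
    # E.g. meantone, (1, 1, 0) and (0, 1, 4), gives minors 1, 4, 4
    # with GCD 1, so no contorsion; doubling the generator mapping
    # gives minors 2, 8, 8 and GCD 2, so contorsion.
    common = 0
    for i, j in combinations(range(len(period_map)), 2):
        minor = period_map[i] * generator_map[j] - period_map[j] * generator_map[i]
        common = gcd(common, minor)
    return common > 1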

That covers all the parts that take a significant amount of time. Here, for comparison, is the profile for some code that uses wedgies instead of the period and generator.

ncalls tottime cumtime filename:lineno(function)
...
27 73.750 468.040 regular_wedgie.py:309(getLinearTemperaments)
1210950 92.120 247.920 regular_wedgie.py:362(__init__)
1210950 155.800 155.800 regular_wedgie.py:673(wedgie)
611514 23.020 81.910 regular_wedgie.py:390(optimalRMSError)
611514 58.890 58.890 regular_wedgie.py:400(magicSums)
1210950 47.200 47.200 regular_wedgie.py:383(complexity)
27 0.920 33.710 regular_wedgie.py:105(getEqualTemperaments)
231270/13947 12.760 31.770 regular_wedgie.py:189(limitedSupersets)
217323 5.200 14.650 regular_wedgie.py:179(addEntry)
231270 6.960 9.920 regular_wedgie.py:167(__init__)
390398 9.150 9.150 regular_wedgie.py:505(add)
390398 5.020 5.020 regular_wedgie.py:303(defaultBadness)
231270 3.380 3.380 regular_wedgie.py:230(value)
231270 2.960 2.960 regular_wedgie.py:224(setData)
27 2.840 2.840 regular_wedgie.py:512(retrieve)
70297 0.980 0.980 regular_wedgie.py:702(nint)

This is old code from before I used general wedgies and Numeric Python to get higher ranks to work. It takes about a minute of profiler-seconds longer than the other version. More than two minutes are spent calculating wedgies. Finding one wedgie is much slower than calculating the weighted prime RMS error. Eventually I changed the code so that it only calculates the wedgie after it's decided the error is okay. That's especially important with the new code, where the wedgie calculations take much longer.
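
For completeness, the rank 2 wedgie from two equal temperament mappings is the vector of 2x2 minors. This is the textbook construction, not necessarily what the wedgie function at line 673 does internally:

from itertools import combinations

def wedgie(val1, val2):
    # E.g. meantone from the vals (12, 19, 28) and (19, 30, 44)
    # comes out as [1, 4, 4] after fixing the sign.
    w = [val1[i] * val2[j] - val1[j] * val2[i]
         for i, j in combinations(range(len(val1)), 2)]
    first = next((x for x in w if x), 0)
    if first < 0:
        w = [-x for x in w]
    return w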

Graham