
Paper: Multiple F0 Estimation In The Transform Domain

🔗Mike Battaglia <battaglia01@gmail.com>

2/28/2011 7:13:41 PM

Gee, I could have sworn I'd thought of this idea before at some point... :P

By my former professor, Corey Cheng (worked with Dolby on the
development of AAC, I believe), and my former colleague Chris Santoro:

http://ismir2009.ismir.net/proceedings/PS1-19.pdf

I think I'm going to ditch all of this HE convolution nonsense and
just start working on this.

-Mike

🔗genewardsmith <genewardsmith@sbcglobal.net>

3/1/2011 12:22:37 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I think I'm going to ditch all of this HE convolution nonsense and
> just start working on this.

How is this a replacement for HE convolution nonsense?

🔗Mike Battaglia <battaglia01@gmail.com>

3/1/2011 12:24:27 AM

On Tue, Mar 1, 2011 at 3:22 AM, genewardsmith
<genewardsmith@sbcglobal.net> wrote:
>
> --- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> > I think I'm going to ditch all of this HE convolution nonsense and
> > just start working on this.
>
> How is this a replacement for HE convolution nonsense?

I'm not sure that this paper is exactly what I was going for, but it's
a step in the right direction in consolidating the ideas I'd had a
while ago. The basic idea: which will give you more F0's, a major
chord or a minor chord?

I sadly admit that that is very much "oversimplifying," but I think
you'll get the gist of it...

-Mike

🔗Mike Battaglia <battaglia01@gmail.com>

3/1/2011 12:51:56 AM

I guess to elaborate a little further, my goal a while ago was to take
the signal and run it through a filterbank of comb filters, and see
what comes out. Each comb filter corresponds to a fundamental, and I
predict that a 4:5:6 will activate fewer comb filters than a 10:12:15.
The whole thing will simply generalize HE, which places an incoming
"dyad" as a combination of basis "dyads." This will place an incoming
"signal" as a combination of basis "fundamentals."

Well, that's also an oversimplification. I guess to elaborate a little
further, instead of using actual comb filters, I'm going to use comb
filters that have a 1/N^2 rolloff because, according to Carl, they
closely approximate the spectral rolloff of the human voice once
formants are removed (I haven't double-checked this yet, so I hope
you're right, Carl).
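
For reference, a 1/N^2 envelope falls off at roughly 12 dB per doubling of harmonic number; a quick numerical check (illustrative numbers only):

```python
import numpy as np

# Harmonic weights with a 1/N^2 rolloff -- the spectral envelope that,
# per Carl, roughly matches a formant-stripped voice (unverified here).
n = np.arange(1, 9)
weights = 1.0 / n ** 2
rolloff_db = 20 * np.log10(weights)
# Each doubling of harmonic number drops the level by ~12 dB:
# n=2 sits ~12 dB below n=1, n=4 sits ~24 dB below, and so on.
```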

Well, that's also an oversimplification. I guess to elaborate a little
further, instead of using actual damped comb filters, I'm going to use
damped comb filters that have frequency spreading, such that each peak
is replaced by a tapered "domain" (i.e. I'm reducing the Q factor, if
you're hip to that term). This is to model the fact that a detuned 5/4 can
still activate the 5th and 4th harmonics of some basis filter.
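
One way to realize those tapered "domains" numerically is to replace each comb peak with a Gaussian bump whose width scales with its center frequency (constant Q). The shape and the q value below are invented for illustration, not part of any finished model:

```python
import numpy as np

def harmonic_template(freqs, f0, n_harmonics=8, q=20.0):
    """Comb-like template whose teeth are Gaussian 'domains' rather than
    delta spikes, so a slightly mistuned partial still activates the
    filter. Peak width scales with center frequency (constant Q);
    lower q means wider, more forgiving peaks."""
    template = np.zeros_like(freqs, dtype=float)
    for n in range(1, n_harmonics + 1):
        center = n * f0
        sigma = center / q
        template += np.exp(-0.5 * ((freqs - center) / sigma) ** 2)
    return template

freqs = np.linspace(0.0, 1000.0, 2001)   # 0.5 Hz grid
tpl = harmonic_template(freqs, 100.0)
# A partial exactly on the 5th harmonic (500 Hz) scores ~1; a detuned
# one at 510 Hz still scores well instead of dropping to zero.
```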

Well, that's also an oversimplification. We don't actually want to use
filters with rolloff here, because if we're assuming the ideal timbre
has all harmonics present with a 1/N^2 rolloff, and we then filter it
with a filter with all harmonics present with a 1/N^2 rolloff, we end
up attenuating the upper harmonics even further and it's just bad
news. We have to use filters with the inverse response to make it all
work out, I think.
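
The over-attenuation is easy to see with toy numbers: multiplying a 1/N^2 timbre by a 1/N^2 comb gives 1/N^4 overall, while the inverse response flattens it:

```python
import numpy as np

n = np.arange(1, 9)
timbre = 1.0 / n ** 2          # assumed 1/N^2 source spectrum
matched_comb = 1.0 / n ** 2    # a comb with the same rolloff
inverse_comb = n ** 2.0        # a comb with the inverse response

after_matched = timbre * matched_comb   # 1/N^4: upper harmonics crushed
after_inverse = timbre * inverse_comb   # flat: every harmonic weighted equally
```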

Well, that's also an oversimplification. I guess to elaborate a little
further, instead of even worrying about the whole filterbank approach,
what I'm really going to do instead is create an integral
transform, analogous to the Fourier transform, that does all of the
above in a hopefully elegant mathematical expression and much faster.

Well, that's also an oversimplification. I guess to elaborate a little
further, some guy named Pierre-Simon Laplace basically did most of
this centuries ago with his Laplace transform, so I'm basically just
going to develop a variant of the Laplace transform that uses a
harmonic series of damped sinusoids instead of just damped sinusoids
(damped sinusoids are the basis functions for the Laplace transform).
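
A numerical sketch of one sample of such a "harmonic Laplace" projection: take the inner product of the signal with a sum of damped complex sinusoids at f0, 2*f0, 3*f0, and so on. The normalization, damping constant q, and harmonic count are all guesses here, not worked-out values:

```python
import numpy as np

def harmonic_laplace(x, sr, f0, n_harmonics=8, q=5.0):
    """One sample of a hypothetical harmonic-series Laplace transform:
    project the signal onto a sum of damped complex sinusoids at
    f0, 2*f0, ..., where the ordinary Laplace transform would use a
    single damped sinusoid."""
    t = np.arange(len(x)) / sr
    basis = sum(np.exp(-q * t) * np.exp(-2j * np.pi * k * f0 * t)
                for k in range(1, n_harmonics + 1))
    return np.vdot(basis, x) / len(x)

# A harmonic tone with partials at 100, 200, 300 Hz lights up the
# f0 = 100 Hz basis function far more than an unrelated candidate.
sr = 4000
t = np.arange(0, 1.0, 1.0 / sr)
x = sum(np.sin(2 * np.pi * f * t) for f in (100.0, 200.0, 300.0))
```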

Well, that's also an oversimplification. I guess to elaborate a little
further, since we're going to do this all digitally, I'm going to have
to use the digital version of the Laplace transform, which is called
the Z-transform. I'm not sure how spectral sampling error is going to
work (is discretization even a word?), but there's probably some way
to work around it.

Well, that's also an oversimplification. I guess to elaborate a little
further, there's no point modelling the full Z-transform if we assume
that each harmonic is uniformly sensitive to mistuning in -linear-
frequency space (meaning that higher harmonics are MORE sensitive to
mistuning in logarithmic space, insofar as actually getting them to
refer to a fundamental is concerned). So we basically just run an FFT
on the whole signal, but first multiply the signal by e^(-qt), where q
is the q-factor in the frequency domain, because the math works itself
out if you work the integral out.
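
That last step is compact in code: damp the signal with e^(-qt), take an ordinary FFT, and each bin becomes a projection onto a damped sinusoid, i.e. a sample of the Laplace transform along a vertical line in the s-plane. The q below is an arbitrary illustrative constant:

```python
import numpy as np

sr = 8000
q = 30.0                                  # damping knob (illustrative)
t = np.arange(0, 1.0, 1.0 / sr)
x = np.sin(2 * np.pi * 440.0 * t)

# Multiply by a decaying exponential, then take a single FFT:
damped_spectrum = np.abs(np.fft.rfft(x * np.exp(-q * t)))
freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
# The 440 Hz line is no longer a delta spike but a peak roughly q/pi Hz
# wide -- the frequency spreading wanted above, obtained with one FFT
# instead of a whole filterbank.
```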

I'll stop because there are further refinements that need to be made
that I haven't worked out yet. Once you've done one of the above, the
idea is, you end up with a "periodicity spectrum" that is somewhat
analogous to the Fourier frequency spectrum. Now that you have all of
these F0's, a thought experiment tells us that a chord like
4:5:6:7:8:9:10 will end up hitting fewer comb filters than
1/(4:5:6:7:8:9:10).

Each F0 will then become a basis vector for the original signal. You
can then assume that the brain has some algorithm for picking out the
"real" vectors and discarding the noise. For this model, which is no
doubt vastly oversimplified (but less so than HE), we're going to
assume that the brain has a more difficult time doing this, however it
does it, when a signal weakly activates a lot of F0's (e.g. is more
discordant) vs strongly activates only a few (e.g. is more
concordant). So now that we've developed this approach to coding the
signal (what should it be called, the periodicity domain?), we can now
apply the usual information-theory definition of entropy here.
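
Once the periodicity spectrum is normalized to a probability distribution, the entropy calculation itself is a one-liner. The activation numbers below are invented purely for illustration:

```python
import numpy as np

def periodicity_entropy(strengths):
    """Shannon entropy (bits) of F0 activation strengths normalized to a
    probability distribution: a few strong activations -> low entropy
    (concordant); many weak ones -> high entropy (discordant)."""
    p = np.asarray(strengths, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

few_strong = [8.0, 4.0, 0.5, 0.5]   # e.g. a strongly fused chord
many_weak = [1.0] * 13              # e.g. a diffuse, discordant cluster
```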

So for something like 4:5:6, the strongest options for F0 will be
something like 4, 5, 6, 2, and 1. You'll also see weak undertones of
each of those. Note that 4, 5, and 6 each appear as their own
fundamental; the brain is a bit confused and isn't sure if this chord
is made up of three separate notes, or one single note, etc. This is
pretty much what justly-tuned major chords sound like to me: my brain
flip-flops
between placing them as three single notes with a bit of fusion and
one huge fused timbre with some harmonics poking out (the former more
so). Detuning 5 as in 12-tet should decrease the strength of the 1
fundamental and make 2 dominate a bit more, and detuning 3 as in
19-tet should decrease the strength of the 2 fundamental and make 1
dominate a bit more.
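
The 4:5:6 claim can be checked with a toy version of the template idea, counting how many chord partials land exactly on harmonics of each candidate F0 (pure tones, exact tuning, purely illustrative):

```python
def template_score(partials, f0, n_harmonics=16):
    """Count how many chord partials coincide with harmonics of a
    candidate fundamental -- a toy version of comb-filter activation."""
    harmonics = {n * f0 for n in range(1, n_harmonics + 1)}
    return sum(p in harmonics for p in partials)

chord = [400, 500, 600]   # 4:5:6 over an implied 100 Hz fundamental
scores = {f0: template_score(chord, f0)
          for f0 in (100, 200, 300, 400, 500, 600)}
# -> {100: 3, 200: 2, 300: 1, 400: 1, 500: 1, 600: 1}
# The "1" fundamental (100 Hz) catches the whole chord, "2" (200 Hz)
# catches the 4 and 6, and each chord note is also its own candidate;
# the relative strengths would depend on the harmonic weighting.
```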

This should work in real time, on actual signals, for any timbre, for
a recording, for anything. There are a few hurdles in working it out
rigorously and that's where I'm at now.

Phew!

-Mike

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

3/2/2011 7:19:30 AM

Isn't autocorrelogram such a periodicity spectrum?

Kalle

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:

> I'll stop because there are further refinements that need to be made
> that I haven't worked out yet. Once you've done one of the above, the
> idea is, you end up with a "periodicity spectrum" that is somewhat
> analogous to the Fourier frequency spectrum. Now that you have all of
> these F0's, a thought experiment tells us that a chord like
> 4:5:6:7:8:9:10 will end up hitting fewer comb filters than
> 1/(4:5:6:7:8:9:10).

🔗Mike Battaglia <battaglia01@gmail.com>

3/2/2011 7:51:04 AM

On Wed, Mar 2, 2011 at 10:19 AM, Kalle Aho <kalleaho@mappi.helsinki.fi> wrote:
>
> Isn't autocorrelogram such a periodicity spectrum?
>
> Kalle

Depends on how it's calculated. Autocorrelation is a linear function
and will never synthesize f0 from a timbre containing only f1, f2, and
f3. If you perform some kind of nonlinear calculation on it, like a
normalized square difference function, then yes, you can. But more
importantly, if your autocorrelogram is just the signal delayed by T
ms in time and then cross-correlated with itself, that's going to do
crazy stupid stuff to the spectrum and will only be a very rough
version of what I've laid out - the impulse response will be something
like |__|_______, which corresponds to a feedforward comb filter (or a
"comb notch" filter).

You'll have to refer me to a specific type of autocorrelogram so I
can comment on how different models address these points.

-Mike

🔗Kalle Aho <kalleaho@mappi.helsinki.fi>

3/2/2011 8:48:39 AM

Perhaps something like the thing in pdfs under "Auditory neurophysiology" here:

http://www.cariani.com/

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> Depends on how it's calculated. Autocorrelation is a linear function
> and will never synthesize f0 from a timbre containing only f1, f2, and
> f3. [snip]

🔗Paul <phjelmstad@msn.com>

3/2/2011 10:00:38 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> I guess to elaborate a little further, my goal a while ago was to take
> the signal and run it through a filterbank of comb filters, and see
> what comes out. [snip]

My brother is a neurophysiologist, have you studied much psychoacoustics? I've looked at it some...

🔗Mike Battaglia <battaglia01@gmail.com>

3/2/2011 11:56:47 AM

On Wed, Mar 2, 2011 at 1:00 PM, Paul <phjelmstad@msn.com> wrote:
>
> My brother is a neurophysiologist, have you studied much psychoacoustics? I've looked at it some...

That's what a lot of my formal background of study is in, that and
signal processing. If your brother has any good info on how the brain
processes periodicity information, let me know!

-Mike

🔗Carl Lumma <carl@lumma.org>

3/2/2011 12:12:02 PM

Mike wrote:
>> My brother is a neurophysiologist, have you studied much
>>psychoacoustics? I've looked at it some...
>
>That's what a lot of my formal background of study is in,

Really! I wasn't aware you had one iota of psychoacoustics
or neurophysiology training.

-Carl

🔗Mike Battaglia <battaglia01@gmail.com>

3/2/2011 12:19:28 PM

On Wed, Mar 2, 2011 at 3:12 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
> >> My brother is a neurophysiologist, have you studied much
> >>psychoacoustics? I've looked at it some...
> >
> >That's what a lot of my formal background of study is in,
>
> Really! I wasn't aware you had one iota of psychoacoustics
> or neurophysiology training.

He asked if I'd studied psychoacoustics, and I said a lot of my formal
background of study was in psychoacoustics.

-Mike

🔗Carl Lumma <carl@lumma.org>

3/2/2011 12:48:53 PM

At 12:19 PM 3/2/2011, you wrote:

>He asked if I'd studied psychoacoustics, and I said a lot of my formal
>background of study was in psychoacoustics.

Really? How come you've never mentioned it before? -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

3/2/2011 12:50:09 PM

On Wed, Mar 2, 2011 at 3:48 PM, Carl Lumma <carl@lumma.org> wrote:
>
> At 12:19 PM 3/2/2011, you wrote:
>
> >He asked if I'd studied psychoacoustics, and I said a lot of my formal
> >background of study was in psychoacoustics.
>
> Really? How come you've never mentioned it before? -Carl

Uh, is this a joke? How many times have I mentioned my major and what
it is that I've studied?

-Mike

🔗Carl Lumma <carl@lumma.org>

3/2/2011 1:57:30 PM

Mike wrote:

>Uh, is this a joke? How many times have I mentioned my major and what
>it is that I've studied?

You've mentioned you took signal processing courses.
What was your major? -Carl

🔗Mike Battaglia <battaglia01@gmail.com>

3/2/2011 2:33:29 PM

On Wed, Mar 2, 2011 at 4:57 PM, Carl Lumma <carl@lumma.org> wrote:
>
> Mike wrote:
>
> >Uh, is this a joke? How many times have I mentioned my major and what
> >it is that I've studied?
>
> You've mentioned you took signal processing courses.
> What was your major? -Carl

My major was in Music Engineering Technology at the University of
Miami. I took dual minors in electrical and computer engineering. It
was an interdisciplinary major that was roughly a mix of the
following:

- Digital audio, DSP, audio electronics, acoustics, psychoacoustics, etc
- Computer and electrical engineering
- Music performance
- Recording and live sound

Of the four, I downplayed the last one and took extra coursework for
the first three. The last one is getting slowly phased out as time
goes on and the first one is getting focused on more and more. I took
extra DSP courses and took on an extra DSP senior project where a
biomedical engineering student and I built an EMG-controlled MIDI
controller, which got me the extra minor in EE. I took some extra
computer courses in various things, taught C++ for a semester and
worked as a C++ grader for two years, did some other stuff. I tried to
make myself an unofficial performance double major, so I took extra
classical piano lessons for a few semesters, studied with the Dean for
a semester, kept taking piano lessons my senior year despite it not
being a requirement for MUEs, did a senior recital that I didn't have
to do, ran extra ensembles and made myself available as an accompanist
for other senior recitals, etc. So I had little free time, and an
embarrassing amount of what I did have was spent on the
tuning list. I developed a reputation for being an eccentric nutjob
rain man shell of my former self during my senior year by always being
too busy to hang out with any of my friends and talking mostly about
overtones.

As far as my specific psychoacoustics background is concerned, we
studied everything from the basics of critical band effects and the
missing fundamental phenomenon to HRTF's, spatialization techniques,
etc. We covered MP3 and AAC and learned about the psychoacoustic
models behind each of them, which I assure you are far more
complicated than just critical band masking. We covered random stuff
from stereo perception to speech production and phoneme recognition to
the psychoacoustic techniques behind different reverb algorithms. We
covered all of the basic stuff dealing with loudness curves, and
A-weighting, and the nonlinear correlation between loudness and
amplitude, and so on. We studied information theory for a little bit
as part of understanding exactly how MP3 works, and that's why I
thought HE was such a neat idea when I first heard about it. There's a
lot of stuff I'm forgetting, but pretty much all of it was covered in
some capacity, with the neurological side of it being the weakest
part. My continued involvement on this list led me to try and delve a
little further into everything than the basic coursework covered.

Here's a wiki from my senior year DSP class which gives a pretty good
representative sample of the stuff I studied, except this is after
we'd already learned a lot of the psychoacoustics and no performance
is involved here at all:
http://505606.pbworks.com/w/page/988719/ArticleLinks

Are you happy now?

-Mike

🔗Carl Lumma <carl@lumma.org>

3/2/2011 2:41:39 PM

>Here's a wiki from my senior year DSP class which gives a pretty good
>representative sample of the stuff I studied, except this is after
>we'd already learned a lot of the psychoacoustics and no performance
>is involved here at all:
> http://505606.pbworks.com/w/page/988719/ArticleLinks
>
>Are you happy now?

Yes, cool course. Thanks for sharing. -C.

🔗Paul <phjelmstad@msn.com>

3/4/2011 9:41:14 AM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> On Wed, Mar 2, 2011 at 1:00 PM, Paul <phjelmstad@...> wrote:
> >
> > My brother is a neurophysiologist, have you studied much psychoacoustics? I've looked at it some...
>
> That's what a lot of my formal background of study is in, that and
> signal processing. If your brother has any good info on how the brain
> processes periodicity information, let me know!
>
> -Mike
>
Interesting. My dad and my brother also have BSEE degrees. He makes
use of electrical theory in the brain, of course! I know some of this
but I prefer to stay theoretical (whatever that means). What do you mean by periodicity? Like the theory of Periodic Sequences, Fourier Analysis, that sort of thing? Looks like you have combined math,
music and technology, that's pretty cool. Unfortunately I think he is too busy to look at this now but I could certainly ask specific questions for you if you want.

PGH

🔗Mike Battaglia <battaglia01@gmail.com>

3/4/2011 9:54:37 AM

On Fri, Mar 4, 2011 at 12:41 PM, Paul <phjelmstad@msn.com> wrote:
>
> Interesting. My dad and my brother also have BSEE degrees. He makes
> use of electrical theory in the brain, of course! I know some of this
> but I prefer to stay theoretical (whatever that means). What do you mean by periodicity? Like the theory of Periodic Sequences, Fourier Analysis, that sort of thing? Looks like you have combined math,
> music and technology, that's pretty cool. Unfortunately I think he is too busy to look at this now but I could certainly ask specific questions for you if you want.
>
> PGH

Er, in what sentence in that whole thing did I use periodicity in a
way that confused you? I meant in a Fourier-Analysis related kind of
way.

Can you ask him what the bleeding edge in research is on the brain's
perception of complex tones? This is related to the perception of
timbre and hence related to the perception of chords.

The more accurate a picture I can get of exactly what the brain is
doing as a system, the better I'll be able to represent this as a
signal processing algorithm. The problem is that I think that for a
model like this to accurately predict something like consonance and
dissonance, it'll have to be pretty accurate (or there will have to be
enough free parameters in the model for us to tweak it to see what
works). So the stuff with correlograms, for example, while being a
step in the right direction (and an obvious and intuitive step at
that), I don't think will be accurate enough to form the basis for a
consonance and dissonance model. Luckily the correlogram stuff is like
what, 50 years old? So I'm sure a lot's been done since then.

Also, if you could ask him whether there are any general techniques
people use to model heavily nonlinear systems in the brain as simpler
ones (simplified Volterra series, or various other tricks I might not
be aware of), please let me know!

-Mike

🔗Paul <phjelmstad@msn.com>

3/4/2011 10:06:34 PM

--- In tuning-math@yahoogroups.com, Mike Battaglia <battaglia01@...> wrote:
>
> Can you ask him what the bleeding edge in research is on the brain's
> perception of complex tones? This is related to the perception of
> timbre and hence related to the perception of chords. [snip]

Okay. I've also been studying some sites that bring group theory into
signal processing; very interesting. But let's start with your
ideas... btw, I was a Piano and Math BM and BA.

Paul