back to list

Lack of Rationale

🔗touchedchuckk <BadMuthaHubbard@...>

9/9/2009 12:44:02 PM

I posted a new version of Rationale recently, with lots of fixes. It works great here (on Linux).
I see no one has downloaded it, and there haven't been too many views of the tutorial vid I posted here either. Has anyone had trouble getting Rationale working? Is the interface not useable? I'm guessing the absence of MIDI is a problem for most, no? Perhaps not having support for non-just tunings?
Here's Rationale:
http://rationale.sourceforge.net/
and the tutorial vid:
http://www.youtube.com/watch?v=UEkD_SisGz0

-Chuckk

🔗Carl Lumma <carl@...>

9/9/2009 12:45:46 PM

I downloaded it, but subsequently deleted it.
Waiting for the installer promised in the readme.

-Carl

At 12:44 PM 9/9/2009, you wrote:
>I posted a new version of Rationale recently, with lots of fixes. It
>works great here (on Linux).
>I see no one has downloaded it, and there haven't been too many views
>of the tutorial vid I posted here either. Has anyone had trouble
>getting Rationale working? Is the interface not useable? I'm guessing
>the absence of MIDI is a problem for most, no? Perhaps not having
>support for non-just tunings?
>Here's Rationale:
>http://rationale.sourceforge.net/
>and the tutorial vid:
>http://www.youtube.com/watch?v=UEkD_SisGz0
>
>-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/9/2009 12:58:28 PM

OK, thanks.
I'm waiting too, for the Windows Csound version to be fixed, actually. I forgot that small detail; there's an annoying error message that's out of my control. I just got done checking the Csound list to see if they fixed it, too.
-Chuckk

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> I downloaded it, but subsequently deleted it.
> Waiting for the installer promised in the readme.
>
> -Carl
>
> At 12:44 PM 9/9/2009, you wrote:
> >I posted a new version of Rationale recently, with lots of fixes. It
> >works great here (on Linux).
> >I see no one has downloaded it, and there haven't been too many views
> >of the tutorial vid I posted here either. Has anyone had trouble
> >getting Rationale working? Is the interface not useable? I'm guessing
> >the absence of MIDI is a problem for most, no? Perhaps not having
> >support for non-just tunings?
> >Here's Rationale:
> >http://rationale.sourceforge.net/
> >and the tutorial vid:
> >http://www.youtube.com/watch?v=UEkD_SisGz0
> >
> >-Chuckk
>

🔗Carl Lumma <carl@...>

9/9/2009 1:10:37 PM

By the way, I have two rules:

1. I never install cygwin for any reason.
2. I never download Csound for any reason.

So if I'm going to use Rationale (and I'd sure like to), it's
going to have to come with Csound and without cygwin. Think
that'll be possible?

Alternatively, I have a Mac, so that takes care of #1 if it's
a problem. However, xterm has its own issues if you're
using that.

-Carl

At 12:58 PM 9/9/2009, you wrote:
>OK, thanks.
>I'm waiting too, for the Windows Csound version to be fixed, actually.
>I forgot that small detail; there's an annoying error message that's
>out of my control. I just got done checking the Csound list to see if
>they fixed it, too.
>-Chuckk
>
>
>--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>>
>> I downloaded it, but subsequently deleted it.
>> Waiting for the installer promised in the readme.
>>
>> -Carl
>>
>> At 12:44 PM 9/9/2009, you wrote:
>> >I posted a new version of Rationale recently, with lots of fixes. It
>> >works great here (on Linux).
>> >I see no one has downloaded it, and there haven't been too many views
>> >of the tutorial vid I posted here either. Has anyone had trouble
>> >getting Rationale working? Is the interface not useable? I'm guessing
>> >the absence of MIDI is a problem for most, no? Perhaps not having
>> >support for non-just tunings?
>> >Here's Rationale:
>> >http://rationale.sourceforge.net/
>> >and the tutorial vid:
>> >http://www.youtube.com/watch?v=UEkD_SisGz0
>> >
>> >-Chuckk
>>
>

🔗Aaron Johnson <aaron@...>

9/9/2009 2:10:06 PM

Hey Carl, curious, do you not like Csound, or do you find the install
a bitch (it sure can be), or both?

On Wed, Sep 9, 2009 at 3:10 PM, Carl Lumma <carl@...> wrote:
> By the way, I have two rules:
>
> 1. I never install cygwin for any reason.
> 2. I never download Csound for any reason.
>
> So if I'm going to use Rationale (and I'd sure like to), it's
> going to have to come with Csound and without cygwin.  Think
> that'll be possible?
>
> Alternatively, I have a Mac, so that takes care of #1 if it's
> a problem.  However, xterm has its own issues if you're
> using that.
>
> -Carl
>
> At 12:58 PM 9/9/2009, you wrote:
>>OK, thanks.
>>I'm waiting too, for the Windows Csound version to be fixed, actually.
>>I forgot that small detail; there's an annoying error message that's
>>out of my control. I just got done checking the Csound list to see if
>>they fixed it, too.
>>-Chuckk
>>
>>
>>--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>>>
>>> I downloaded it, but subsequently deleted it.
>>> Waiting for the installer promised in the readme.
>>>
>>> -Carl
>>>
>>> At 12:44 PM 9/9/2009, you wrote:
>>> >I posted a new version of Rationale recently, with lots of fixes. It
>>> >works great here (on Linux).
>>> >I see no one has downloaded it, and there haven't been too many views
>>> >of the tutorial vid I posted here either. Has anyone had trouble
>>> >getting Rationale working? Is the interface not useable? I'm guessing
>>> >the absence of MIDI is a problem for most, no? Perhaps not having
>>> >support for non-just tunings?
>>> >Here's Rationale:
>>> >http://rationale.sourceforge.net/
>>> >and the tutorial vid:
>>> >http://www.youtube.com/watch?v=UEkD_SisGz0
>>> >
>>> >-Chuckk
>>>
>>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Carl Lumma <carl@...>

9/9/2009 4:00:39 PM

Both. Csound is powerful, but I don't think it even registers
as a musical instrument. People like Prent Rodgers and Dave Seidel
get fantastic results out of it, but I'm guessing they have a lot
of patience. If you're going to use it as a backend, my advice
is to hide it.

-Carl

At 02:10 PM 9/9/2009, you wrote:
>Hey Carl, curious, do you not like Csound, or do you find the install
>a bitch (it sure can be), or both?
>
>
>On Wed, Sep 9, 2009 at 3:10 PM, Carl Lumma <carl@...> wrote:
>> By the way, I have two rules:
>>
>> 1. I never install cygwin for any reason.
>> 2. I never download Csound for any reason.
>>
>> So if I'm going to use Rationale (and I'd sure like to), it's
>> going to have to come with Csound and without cygwin. Think
>> that'll be possible?
>>
>> Alternatively, I have a Mac, so that takes care of #1 if it's
>> a problem. However, xterm has its own issues if you're
>> using that.
>>
>> -Carl
>>
>> At 12:58 PM 9/9/2009, you wrote:
>>>OK, thanks.
>>>I'm waiting too, for the Windows Csound version to be fixed, actually.
>>>I forgot that small detail; there's an annoying error message that's
>>>out of my control. I just got done checking the Csound list to see if
>>>they fixed it, too.
>>>-Chuckk
>>>
>>>
>>>--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>>>>
>>>> I downloaded it, but subsequently deleted it.
>>>> Waiting for the installer promised in the readme.
>>>>
>>>> -Carl
>>>>
>>>> At 12:44 PM 9/9/2009, you wrote:
>>>> >I posted a new version of Rationale recently, with lots of fixes. It
>>>> >works great here (on Linux).
>>>> >I see no one has downloaded it, and there haven't been too many views
>>>> >of the tutorial vid I posted here either. Has anyone had trouble
>>>> >getting Rationale working? Is the interface not useable? I'm guessing
>>>> >the absence of MIDI is a problem for most, no? Perhaps not having
>>>> >support for non-just tunings?
>>>> >Here's Rationale:
>>>> >http://rationale.sourceforge.net/
>>>> >and the tutorial vid:
>>>> >http://www.youtube.com/watch?v=UEkD_SisGz0
>>>> >
>>>> >-Chuckk
>>>>
>>>
>>
>>
>>
>
>
>--
>
>Aaron Krister Johnson
>http://www.akjmusic.com
>http://www.untwelve.org
>

🔗Cody Hallenbeck <codyhallenbeck@...>

9/9/2009 7:57:08 PM

Hi Chuckk,
I'm really excited about Rationale, and am very grateful for the work you've
done on it. I'm mostly waiting for it to be installable on OSX 10.5 (or even
10.6). It's just way more work than I am able to do to get csound working
with python. My understanding is this problem should be eliminated with the
release of csound 5.11. I should try the Windows version again, or get Linux
running in a virtual machine.

I also haven't learned csound, so I don't know how to get good timbres
through it. The last time I tried the windows version, the soundfont support
was poor with the instruments I tried. In particular, it was, for whatever
reason, only using one sample for the entire range of each instrument. I do
have explicit plans for learning supercollider, though, so I can hopefully
get something I like going with the osc support.

MIDI support would be pretty useful, to be honest. I found working with the
MIDI output of jisequencer entirely acceptable, for what it's worth. The 16
note polyphony it provided was adequate for what I was doing, and I could
even use multiple instruments if whatever synth I was using could load them
fast enough. For example, I was routing jisequencer into the OSX application
MidiPipe, which was set up to act as a soundfont synth. I was doing this was
rather large sample sets and it still worked OK.

For what it's worth, I don't think I'd use non-ji tunings much in Rationale.
Maybe you could find a way to allow any interval to be in the note bank. It
could be interesting, for example, to be able to define equal tempered and
just ratios in a piece. You could, for example, write a piece for equal
tempered guitar or piano and the harmonics of those strings. You could also
do neat things like define golden means. Again, this stuff is all cool, but
personally I just want the program to work well with just intonation.

Again, thanks for all your hard work. The general concept of
jisequencer/rationale is the most intuitive way for me, at least, to
approach extended just intonation, so I hope to be using it in the future.

On Wed, Sep 9, 2009 at 12:44 PM, touchedchuckk
<BadMuthaHubbard@...>wrote:

>
>
> I posted a new version of Rationale recently, with lots of fixes. It works
> great here (on Linux).
> I see no one has downloaded it, and there haven't been too many views of
> the tutorial vid I posted here either. Has anyone had trouble getting
> Rationale working? Is the interface not useable? I'm guessing the absence of
> MIDI is a problem for most, no? Perhaps not having support for non-just
> tunings?
> Here's Rationale:
> http://rationale.sourceforge.net/
> and the tutorial vid:
> http://www.youtube.com/watch?v=UEkD_SisGz0
>
> -Chuckk
>
>
>

[Non-text portions of this message have been removed]

🔗touchedchuckk <BadMuthaHubbard@...>

9/9/2009 10:53:25 PM

Hi Cody.

I found out from the Csound devs that the 10.4 Intel binary should work with Python IF you install Python 2.5 from http://pythonmac.org/packages/py25-fat/dmg/python-2.5-macosx.dmg
I can't test it as I don't have 10.5.

I haven't forgotten what you said about the soundfonts. Csound has Fluidsynth opcodes I could try; do you have a soundfont that shows the difference well and will fit in an email to me?

Thanks once again for the feedback.
-Chuckk

--- In MakeMicroMusic@yahoogroups.com, Cody Hallenbeck <codyhallenbeck@...> wrote:
>
> Hi Chuckk,
> I'm really excited about Rationale, and am very grateful for the work you've
> done on it. I'm mostly waiting for it to be installable on OSX 10.5 (or even
> 10.6). It's just way more work than I am able to do to get csound working
> with python. My understanding is this problem should be eliminated with the
> release of csound 5.11. I should try the Windows version again, or get Linux
> running in a virtual machine.
>
> I also haven't learned csound, so I don't know how to get good timbres
> through it. The last time I tried the windows version, the soundfont support
> was poor with the instruments I tried. In particular, it was, for whatever
> reason, only using one sample for the entire range of each instrument. I do
> have explicit plans for learning supercollider, though, so I can hopefully
> get something I like going with the osc support.
>
> MIDI support would be pretty useful, to be honest. I found working with the
> MIDI output of jisequencer entirely acceptable, for what it's worth. The 16
> note polyphony it provided was adequate for what I was doing, and I could
> even use multiple instruments if whatever synth I was using could load them
> fast enough. For example, I was routing jisequencer into the OSX application
> MidiPipe, which was set up to act as a soundfont synth. I was doing this was
> rather large sample sets and it still worked OK.
>
> For what it's worth, I don't think I'd use non-ji tunings much in Rationale.
> Maybe you could find a way to allow any interval to be in the note bank. It
> could be interesting, for example, to be able to define equal tempered and
> just ratios in a piece. You could, for example, write a piece for equal
> tempered guitar or piano and the harmonics of those strings. You could also
> do neat things like define golden means. Again, this stuff is all cool, but
> personally I just want the program to work well with just intonation.
>
> Again, thanks for all your hard work. The general concept of
> jisequencer/rationale is the most intuitive way for me, at least, to
> approach extended just intonation, so I hope to be using it in the future.
>
> On Wed, Sep 9, 2009 at 12:44 PM, touchedchuckk
> <BadMuthaHubbard@...>wrote:
>
> >
> >
> > I posted a new version of Rationale recently, with lots of fixes. It works
> > great here (on Linux).
> > I see no one has downloaded it, and there haven't been too many views of
> > the tutorial vid I posted here either. Has anyone had trouble getting
> > Rationale working? Is the interface not useable? I'm guessing the absence of
> > MIDI is a problem for most, no? Perhaps not having support for non-just
> > tunings?
> > Here's Rationale:
> > http://rationale.sourceforge.net/
> > and the tutorial vid:
> > http://www.youtube.com/watch?v=UEkD_SisGz0
> >
> > -Chuckk
> >
> >
> >
>
>
> [Non-text portions of this message have been removed]
>

🔗touchedchuckk <BadMuthaHubbard@...>

9/9/2009 11:02:26 PM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> By the way, I have two rules:
>
> 1. I never install cygwin for any reason.
> 2. I never download Csound for any reason.

Rule 1 is mine as well, but I have the opposite of Rule 2: I always download Csound for any reason.

> So if I'm going to use Rationale (and I'd sure like to), it's
> going to have to come with Csound and without cygwin. Think
> that'll be possible?

I think so. There's a package that can bundle all the libraries together into a single executable, supposedly. Once the Csound devs work out some Windows stuff, it should be doable.

>
> Alternatively, I have a Mac, so that takes care of #1 if it's
> a problem. However, xterm has its own issues if you're
> using that.

If your Mac is as fast as your Windows machine, it ought to work (I can help). If not, probably not worth it.
Thanks for the feedback.

-Chuckk

>
> -Carl
>
> At 12:58 PM 9/9/2009, you wrote:
> >OK, thanks.
> >I'm waiting too, for the Windows Csound version to be fixed, actually.
> >I forgot that small detail; there's an annoying error message that's
> >out of my control. I just got done checking the Csound list to see if
> >they fixed it, too.
> >-Chuckk
> >
> >
> >--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@> wrote:
> >>
> >> I downloaded it, but subsequently deleted it.
> >> Waiting for the installer promised in the readme.
> >>
> >> -Carl
> >>
> >> At 12:44 PM 9/9/2009, you wrote:
> >> >I posted a new version of Rationale recently, with lots of fixes. It
> >> >works great here (on Linux).
> >> >I see no one has downloaded it, and there haven't been too many views
> >> >of the tutorial vid I posted here either. Has anyone had trouble
> >> >getting Rationale working? Is the interface not useable? I'm guessing
> >> >the absence of MIDI is a problem for most, no? Perhaps not having
> >> >support for non-just tunings?
> >> >Here's Rationale:
> >> >http://rationale.sourceforge.net/
> >> >and the tutorial vid:
> >> >http://www.youtube.com/watch?v=UEkD_SisGz0
> >> >
> >> >-Chuckk
> >>
> >
>

🔗Graham Breed <gbreed@...>

9/10/2009 2:16:06 AM

touchedchuckk wrote:

> I haven't forgotten what you said about the soundfonts.
> Csound has Fluidsynth opcodes I could try; do you have a
> soundfont that shows the difference well and will fit in
> an email to me?

The Fluid opcodes are not microtonal friendly. But somebody said recently that the stand-alone FluidSynth does have tuning tables. So you could get some microtonality by exposing them to Csound -- but still not the ideal solution for JI.

There are some other SoundFont opcodes but they have their own problems.

Graham

p.s. I'm interested in what you're doing but not actively following it now.

🔗Carl Lumma <carl@...>

9/10/2009 8:44:52 AM

>If your Mac is as fast as your Windows machine,

It is.

>it ought to work (I
>can help). If not, probably not worth it.

Well, I'm recommending you make single-click (or single-drag)
installers for both platforms. Rationale is well-designed and
deserves it.

I'll second soundfont support, but it's not a dealbreaker
for me. I'd rather have it be released that much sooner,
with a couple of basic, clean, synthy presets.

-Carl

🔗Aaron Johnson <aaron@...>

9/10/2009 11:44:34 AM

Carl,

Csound already supports soundfonts, so I don't see why this is an
issue. The older depricated ones in particular are very microtonally
flexible--the newer ones (fluid opcodes) are not, however.

But can't someone just create their own orchestra files and use them
through rationale?

BTW, Carl, you probably relate to Prent's use of Csound b/c of his use
of samples. My point being that Csound is what you make of it.
Whatever sounds you fancy are theoretically possible, if you know what
you are doing. But yes, it takes patience and a sense of what makes
good electronic 'orchestration'. In that respect, it's as unforgiving
as a real orchestra.

There are very few software packages that I can think of that offer
oscillators, FM, sampling and SoundFonts, physical modelling,
additive, scanned synthesis, dozens of noise types, phase vocoding,
LPC, a whole suite of DSP tools, dozens of filters, arbitrarily
designable envelopes, realtime and non-realtime rendering, and utter
flexibility and microtonal friendliness. Complete Modular freedom.
And, not a cent does it cost. It stands alone when you look at all
these criteria.

Why isn't it used more? Because it's not as instant gratification friendly...

AKJ

On Wed, Sep 9, 2009 at 6:00 PM, Carl Lumma <carl@lumma.org> wrote:
> Both.  Csound is powerful, but I don't think it even registers
> as a musical instrument.  People like Prent Rodgers and Dave Seidel
> get fantastic results out of it, but I'm guessing they have a lot
> of patience.  If you're going to use it as a backend, my advice
> is to hide it.
>
> -Carl
>
> At 02:10 PM 9/9/2009, you wrote:
>>Hey Carl, curious, do you not like Csound, or do you find the install
>>a bitch (it sure can be), or both?
>>
>>
>>On Wed, Sep 9, 2009 at 3:10 PM, Carl Lumma <carl@lumma.org> wrote:
>>> By the way, I have two rules:
>>>
>>> 1. I never install cygwin for any reason.
>>> 2. I never download Csound for any reason.
>>>
>>> So if I'm going to use Rationale (and I'd sure like to), it's
>>> going to have to come with Csound and without cygwin.  Think
>>> that'll be possible?
>>>
>>> Alternatively, I have a Mac, so that takes care of #1 if it's
>>> a problem.  However, xterm has its own issues if you're
>>> using that.
>>>
>>> -Carl
>>>
>>> At 12:58 PM 9/9/2009, you wrote:
>>>>OK, thanks.
>>>>I'm waiting too, for the Windows Csound version to be fixed, actually.
>>>>I forgot that small detail; there's an annoying error message that's
>>>>out of my control. I just got done checking the Csound list to see if
>>>>they fixed it, too.
>>>>-Chuckk
>>>>
>>>>
>>>>--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>>>>>
>>>>> I downloaded it, but subsequently deleted it.
>>>>> Waiting for the installer promised in the readme.
>>>>>
>>>>> -Carl
>>>>>
>>>>> At 12:44 PM 9/9/2009, you wrote:
>>>>> >I posted a new version of Rationale recently, with lots of fixes. It
>>>>> >works great here (on Linux).
>>>>> >I see no one has downloaded it, and there haven't been too many views
>>>>> >of the tutorial vid I posted here either. Has anyone had trouble
>>>>> >getting Rationale working? Is the interface not useable? I'm guessing
>>>>> >the absence of MIDI is a problem for most, no? Perhaps not having
>>>>> >support for non-just tunings?
>>>>> >Here's Rationale:
>>>>> >http://rationale.sourceforge.net/
>>>>> >and the tutorial vid:
>>>>> >http://www.youtube.com/watch?v=UEkD_SisGz0
>>>>> >
>>>>> >-Chuckk
>>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>--
>>
>>Aaron Krister Johnson
>>http://www.akjmusic.com
>>http://www.untwelve.org
>>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Carl Lumma <carl@...>

9/10/2009 12:53:40 PM

At 11:44 AM 9/10/2009, Aaron wrote:
>Carl,
>
>Csound already supports soundfonts, so I don't see why this is an
>issue.

I didn't raise the issue.

>BTW, Carl, you probably relate to Prent's use of Csound b/c of his use
>of samples.

I relate to it because he gets great results with it. But
it's something akin to a wall-sized modular synth, which is
to say, completely uninteresting to myself and the vast majority
of musicians.

-Carl

🔗touchedchuckk <BadMuthaHubbard@...>

9/11/2009 12:42:18 PM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> >If your Mac is as fast as your Windows machine,
>
> It is.
>
> >it ought to work (I
> >can help). If not, probably not worth it.
>
> Well, I'm recommending you make single-click (or single-drag)
> installers for both platforms. Rationale is well-designed and
> deserves it.

This was one of my original reasons for rewriting it; I had created a program that ran as a Pure Data patch, and that was rickety and very few people were willing to get Pure Data working first in order to use it. Python is far better, and has py2app and py2exe. It'll happen.

>
> I'll second soundfont support, but it's not a dealbreaker
> for me. I'd rather have it be released that much sooner,
> with a couple of basic, clean, synthy presets.

There is soundfont support, although as Cody says it isn't the most sophisticated. I may have to dig into the actual Python soundfont libraries some day instead of relying on Csound's soundfont support, but that's a big job. What is not there at all yet is MIDI output.

-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/11/2009 12:49:50 PM

--- In MakeMicroMusic@yahoogroups.com, Aaron Johnson <aaron@...> wrote:
>
> Carl,
>
> Csound already supports soundfonts, so I don't see why this is an
> issue. The older depricated ones in particular are very microtonally
> flexible--the newer ones (fluid opcodes) are not, however.

Rationale uses the older ones, because they work more directly; the Fluid opcodes require starting a FluidSynth engine first, and some hoops to jump through. I didn't realize they don't allow microtones.

>
> But can't someone just create their own orchestra files and use them
> through rationale?

Absolutely, that's currently the best-supported option (because it's what I do). An idea I've toyed with would be to create a small repository of Rationale-enabled Csound instruments that folks could simply plug in to their orchestras. Rationale has a few advantages, in fact: for instance, the score time and duration of a note can be passed to Csound unaltered as separate arguments, whereas in pure Csound they are automatically adjusted according to the tempo, so the orchestra only ever sees seconds, not beats.

>
> BTW, Carl, you probably relate to Prent's use of Csound b/c of his use
> of samples. My point being that Csound is what you make of it.
> Whatever sounds you fancy are theoretically possible, if you know what
> you are doing. But yes, it takes patience and a sense of what makes
> good electronic 'orchestration'. In that respect, it's as unforgiving
> as a real orchestra.

Absolutely. A real orchestra takes a lifetime to master.

> There are very few software packages that I can think of that offer
> oscillators, FM, sampling and SoundFonts, physical modelling,
> additive, scanned synthesis, dozens of noise types, phase vocoding,
> LPC, a whole suite of DSP tools, dozens of filters, arbitrarily
> designable envelopes, realtime and non-realtime rendering, and utter
> flexibility and microtonal friendliness. Complete Modular freedom.
> And, not a cent does it cost. It stands alone when you look at all
> these criteria.

I think there are others; SuperCollider, ChucK, and Pure Data all have impressive arsenals. I believe Csound, though, has accepted more of the input from its users into its canonical version, so it has a much bigger default library. It's also very well designed, though I don't understand software design well enough to say it's the best.

-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/11/2009 12:52:43 PM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
> >BTW, Carl, you probably relate to Prent's use of Csound b/c of his use
> >of samples.
>
> I relate to it because he gets great results with it. But
> it's something akin to a wall-sized modular synth, which is
> to say, completely uninteresting to myself and the vast majority
> of musicians.
>
> -Carl
>

Carl, really! Most of what is interesting to you is uninteresting to the vast majority of musicians! That's something to be celebrated, IMO.
Point taken, though. Using Csound to make music can be like using a crane to play ping-pong.

-Chuckk

🔗Carl Lumma <carl@...>

9/11/2009 12:56:25 PM

Chuckk wrote:

>There is soundfont support, although as Cody says it isn't the most
>sophisticated. I may have to dig into the actual Python soundfont
>libraries some day instead of relying on Csound's soundfont support,
>but that's a big job. What is not there at all yet is MIDI output.

Like I said, I would think it prudent to ignore it for now, and
rather to build Csound into the package, and pre-roll several basic
synth patches, and expose them in the interface as presets.

What widgets are you using with Python? Have you checked out
Tartini? It's the slickest cross-platform open source music
app I've seen (C++/QT).

-Carl

🔗Carl Lumma <carl@...>

9/11/2009 12:59:59 PM

Chuckk wrote:
>Absolutely. A real orchestra takes a lifetime to master.

On the contrary, software like Synful shows that it does not.
(And the synthesis technique, by the way, is not one that can
be implemented in Csound.)

>Using Csound to make music can be like using a crane to play ping-pong.

Exactly.

-Carl

🔗Aaron Johnson <aaron@...>

9/11/2009 3:06:32 PM

> Like I said, I would think it prudent to ignore it for now, and
> rather to build Csound into the package, and pre-roll several basic
> synth patches, and expose them in the interface as presets.

This brings up an important point. Csound would really gain from some
high quality orchestras shipping with it. You install it, a battle in
itself, then you have to climb another mountain---learning how to make
a nice little set of instruments. Or collecting them from various
places on the net or CD collections.

Definitely NOT plug 'n play.

> What widgets are you using with Python?  Have you checked out
> Tartini?  It's the slickest cross-platform open source music
> app I've seen (C++/QT).

I haven't heard of Tartini. Thanks for the heads up, I'll have to check it out.

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Aaron Johnson <aaron@...>

9/11/2009 3:14:31 PM

On Fri, Sep 11, 2009 at 2:59 PM, Carl Lumma <carl@...> wrote:
> Chuckk wrote:
>>Absolutely. A real orchestra takes a lifetime to master.
>
> On the contrary, software like Synful shows that it does not.

1) A MIDI sampled orchestra is not a real orchestra. You simply cannot
pan and volume control a real orchestra. You have to know what a real
orchestra does and how it works. This takes a lifetime of study. No
accident that Brahms waited decades before premiering his 1st
symphony. By the 19th century, orchestration becomes it's own art.
2) Synful still doesn't sound incredibly realistic. Especially the
strings, which are only barely tolerable. Last I heard, it's not even
micro-capable, either.
3) Show us any piece (yours, anyone else's) done with Synful that
demonstrates orchestral mastery. And why you think it would translate
to a real orchestra....

> (And the synthesis technique, by the way, is not one that can
> be implemented in Csound.)

So? Synful can't do phase vocoding, either. What's the point of such banter?

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Carl Lumma <carl@...>

9/11/2009 4:03:19 PM

Aaron wrote:

>>>Absolutely. A real orchestra takes a lifetime to master.
>>
>> On the contrary, software like Synful shows that it does not.
>
>1) A MIDI sampled orchestra is not a real orchestra. You simply cannot
>pan and volume control a real orchestra.

Synful isn't sampled, but anyway, I wasn't saying that it's
easy to master _orchestration_. Synful does make it easy to
master orchestral synthesis, which is what Csound and (to a
lesser degree) sampled orchestras like GPO make difficult.

>> (And the synthesis technique, by the way, is not one that can
>> be implemented in Csound.)
>
>So? Synful can't do phase vocoding, either. What's the point of such banter?

Somebody said 'Csound can do anything', which isn't even true.

-Carl

🔗Aaron Johnson <aaron@...>

9/11/2009 7:58:06 PM

On Fri, Sep 11, 2009 at 6:03 PM, Carl Lumma <carl@...> wrote:
> Aaron wrote:
>
>>>>Absolutely. A real orchestra takes a lifetime to master.
>>>
>>> On the contrary, software like Synful shows that it does not.
>>
>>1) A MIDI sampled orchestra is not a real orchestra. You simply cannot
>>pan and volume control a real orchestra.
>
> Synful isn't sampled, but anyway, I wasn't saying that it's
> easy to master _orchestration_.  Synful does make it easy to
> master orchestral synthesis, which is what Csound and (to a
> lesser degree) sampled orchestras like GPO make difficult.

I'm not sure I follow what you mean.

re:synful---are you saying no recording of real instruments is
involved in that process?

>>> (And the synthesis technique, by the way, is not one that can
>>> be implemented in Csound.)
>>
>>So? Synful can't do phase vocoding, either. What's the point of such banter?
>
> Somebody said 'Csound can do anything', which isn't even true.

I don't know where that was said, but I certainly did say Csound
packed a lot under it's hood for the buck...it covers the standard
textbook synthesis styles and then some....of course something like
Synful which is a patented new process is an exception.

> -Carl
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Daniel Forro <dan.for@...>

9/11/2009 8:42:09 PM

On 12 Sep 2009, at 7:14 AM, Aaron Johnson wrote:
> On Fri, Sep 11, 2009 at 2:59 PM, Carl Lumma <carl@...> wrote:
> > Chuckk wrote:
> >>Absolutely. A real orchestra takes a lifetime to master.
> >
> > On the contrary, software like Synful shows that it does not.
>
> 1) A MIDI sampled orchestra is not a real orchestra. You simply cannot
> pan and volume control a real orchestra. You have to know what a real
> orchestra does and how it works. This takes a lifetime of study. No
> accident that Brahms waited decades before premiering his 1st
> symphony.
>

I don't know what was behind Brahms' waiting for premiere... BTW,
Brahms is not good example of a composer who could write well for
orchestra... same as Chopin, Liszt, Rachmaninov, Prokofjev... From
all those good pianists (who unfortunately used orchestra as a piano
and piano as an orchestra) only Debussy was able to forget his
"pianism" when writing for orchestra. Similar problem had Franck,
Bruckner, Reger... with their "organism" :-)

> By the 19th century, orchestration becomes it's own art.
>
> Aaron Krister Johnson
>

And as well it was frozen there and deeply connected with traditional
music. In my opinion classical symphonic orchestra is a very
problematic conglomerate of instruments, it can't work well. In fact
it's dead since the times of Mahler, Richard Strauss, Holst or
Stravinski Russian period, which is about 100 years ago. This
orchestra is good only for performing that historical music. So
called mastery of orchestration means just to keep some basic rules
which force composer to do lot of compromises in music itself.
I personally didn't have much desire to write for orchestra, but
there were some opportunities and duties during my study at Music
Academy, and later some commissions. Because orchestra is an
established music institution, "ready made", and when composer is
obedient and keeps all rules, his work will be performed (same like
writing music for piano trio, string quartet or wind quintet -
there's a big chance it will be performed as there are many of such
groups, but for which price? They perform mainly historical music and
are unable to play well contemporary music. Composer can't do
miracles, music will sound the same like many others. With some
exceptions, of course, like Kronos quartet...). So I did, and tried
to do my best and find my ways. During writing more orchestral pieces
(all were written and performed between 1980 - 2005) - last one being
"Double concerto for pipe organ and drums" where I performed organ
part - I thought a lot about orchestra and its role in the
contemporary music. I published a large study in Czech music magazine
on this. Just a few points:

- To get a good sound from it, you have to write rather traditional
music, because a lot of acoustic rules must be kept, especially when
using tutti. But there's no reason just to copy Mahler, Strauss,
Debussy, Ravel or Stravinski... They have used and exhausted all
possible combinations. It's difficult if not impossible to invent
anything new with classical orchestra. So it's loss of time to study
this classical orchestration art. Despite this it's good to study and
to be able to use it, just for purpose of some commission for
functional music. But this knowledge is not too helpful when
composing autonomous New music, only to know, how NOT to use orchestra.

- To get really contemporary sound from orchestra is not far from
impossible. Result will be only bad sound, always very similar, only
bad in a different way - it was exhausted in 50/60ies. Result will be
otherwise Darmstadt overorganized multiserialism chaos of 50ies or
Polish school random aleatorics chaos of 60ies. I met some totally
unable wannabe contemporary composers doing this and declaring that
exactly this sound they wanted (trying to hide their inability to
write well for orchestra and find anything new).

- Instruments in orchestra are not well balanced. When I'll
exaggerate slightly the problem, three flutes in "ff" are nothing
against one trumpet in "p". Then all that masking effect, and take
into the account unbalanced ranges of acoustic instruments and its
narrow connection to dynamics (usually it's not possible to play ff
in low range and pp in upper range). Nothing to say about drums and
percussions - for contemporary music we need a lot more then
triangle, piatti, timpani, snare and kick drum. There's too much of
strings, which are usually the most problematic performers concerning
rhythm... Some instruments are there just for visual effect, like
harp... And lot of useful instruments missing. Bass range is totally
unsatisfying.
I personally solved some of these problems by using whole blocks of
instruments together (woodwinds, brass, strings, drums and other...),
which is of course limitation.

- Different way of creating sound means different articulation,
phrasing, rhythmic precision... Instruments don't glue well together.
Yes, it can become advantage, but normally it's not good. Because to
get good and natural sound the composer is forced to use instruments
and instrumental groups in their most "typical" textures and roles,
which were overused in the past. Any other using doesn't sound well,
and makes all those traditionally educated performers very unhappy.
They can't show in such contemporary music their "empty" virtuosity
based on the scales and broken chords :-)

- Another very special problem is tuning, which I don't need to
analyse too much here as everybody knows well...

- Nothing to say about many other things used in contemporary music,
which traditionally educated orchestral performers (used to perform
mainly historical music) can't do well, be it special ways of
playing, microtones, polyrhythms, polytempos, jazz feeling (for
example) etc. etc..

- Orchestra can sound well only when recorded in studio, not under
normal conditions in the concert halls. Then multi mike technique canbe used (yes, I know, now it's again fashionable to do contact
recording with one stereo mike, but that has no sense in my opinion -
if we have such good technology, why not to use it fully?) to balance
artificially instruments and groups artificially. Then also some
other instruments can be added which is normally (that means purely
acoustically without amplification) impossible (whole plucked group,
ethnic instruments, lot of special small drums...). We can use also
audio processing, unusual and until now unused sound effects, use
artificial reverb, to use sound microscoping (by amplyfing very low
volume sounds)... Only this way can orchestra become again
unbelievably creative tool with unlimited palette of sounds and their
combinations. Even using of electronic instruments in such orchestra
is possible, and everything can be well mixed together. Nothing to say about possibility of recording certain sounds, or atmospheres,
process them electronically, use them as samples and mix them again
with real record... Unlimited possibilities in the domain of
acoustics and electronics. But could any of you imagine existence of
such experimental studio orchestra as a stable institution? It would
be too nice... UNESCO should establish it, one such orchestra in the
world would be enough :-) in the beginning, reserved only for
experiments, film music, selling records of such music... Pure Utopia.

- That means, when we think further in this direction, there's only
one practical solution nowadays, thanks to technology: MIDI orchestra
with sampled, physically modelled or FM or additive harmonic
synthesis.... or more ... imitated acoustic sounds plus using also electronic colors of all kind. This can even sound much better then
real orchestra after some effort (don't forget Dolby Surround). Of
course it is possible to do all necessary controller changes (pitch
bend, vibrato, tremolo, volume, expression, panorama + stereo width,
sharpness of sound, resonance, attack, release, delay, echo,
reverb...), using different articulation (thanks to many kind of
sampled instruments). Yes, it's a question if we can still call such
body an orchestra - but the most important is it works for
contemporary music (at least for me). Besides musician (composer) is
independent on all those issues connected with real orchestra as a
music institution... We can use any microtuning impossible to get from real performers and their limited instruments... We can get
unheard sound colors because we can use instruments out of their
usual range (like contrabass piccolo, or three octaves higher tuba)
and dynamics limitations, nothing to say about fast playing without
taking a breath, impossible trills, tremolos, jumps, polyphony (with
MIDI there's no problem to get 64 flutes together in full piano
range), glissandos, unheard articulation and sound combinations and
their mixing and shadowing... And more creative possibilities to
change natural sound colors or attack and release phase of envelope,
or all those specialties of physical modelling - creating new
syncoustic virtual instruments like floboe, vioflute, flumbone,
claritar, harpsax or so...

OK, back to my piano now...

Daniel Forró

🔗Carl Lumma <carl@...>

9/12/2009 12:34:08 AM

>> Synful isn't sampled, but anyway, I wasn't saying that it's
>> easy to master _orchestration_. Synful does make it easy to
>> master orchestral synthesis, which is what Csound and (to a
>> lesser degree) sampled orchestras like GPO make difficult.
>
>I'm not sure I follow what you mean.

Csound is a programming language, GPO is a MIDI instrument.
See the difference?

But GPO is still more difficult than Synful, because to make
it not sound like crap you have to dress up the MIDI just so.
Synful does this for you by leveraging a ton of performance
metadata.

>re:synful---are you saying no recording of real instruments is
>involved in that process?

It's an additive synthesizer, but recordings are involved.

-Carl

🔗Aaron Johnson <aaron@...>

9/12/2009 7:35:48 AM

On Sat, Sep 12, 2009 at 2:34 AM, Carl Lumma <carl@...> wrote:
>>> Synful isn't sampled, but anyway, I wasn't saying that it's
>>> easy to master _orchestration_.  Synful does make it easy to
>>> master orchestral synthesis, which is what Csound and (to a
>>> lesser degree) sampled orchestras like GPO make difficult.
>>
>>I'm not sure I follow what you mean.
>
> Csound is a programming language, GPO is a MIDI instrument.
> See the difference?
>
> But GPO is still more difficult than Synful, because to make
> it not sound like crap you have to dress up the MIDI just so.
> Synful does this for you by leveraging a ton of performance
> metadata.

I think we've been talking about different things. I was talking about
orchestration as an art, Chuckk said "absolutely, a real orchestra
take a lifetime to master", and then you said "no, Synful shows that
it doesn't"...now it seems like you've baited and switched, and are
noe talking about how easy Synful makes it to sound like an orchestra,
which is not what I was talking about, nor what I believe Chuckk was
talking about!

>
>>re:synful---are you saying no recording of real instruments is
>>involved in that process?
>
> It's an additive synthesizer, but recordings are involved.

Right, so it's _resynthesis_, which still involves sampling an
instrument at the front end. My point, more accurately, was "no
sampling or resynthesis process at the moment can ever sound as
convincingly 'real' as a real instrument"...which is not to say that
they can't be expressive or useful in their own right....

AKJ

-----------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Aaron Johnson <aaron@...>

9/12/2009 7:29:12 AM

On Fri, Sep 11, 2009 at 10:42 PM, Daniel Forro <dan.for@...> wrote:
>
> I don't know what was behind Brahms' waiting for premiere... BTW,
> Brahms is not good example of a composer who could write well for
> orchestra... same as Chopin, Liszt, Rachmaninov, Prokofjev... From
> all those good pianists (who unfortunately used orchestra as a piano
> and piano as an orchestra) only Debussy was able to forget his
> "pianism" when writing for orchestra. Similar problem had Franck,
> Bruckner, Reger... with their "organism" :-)

Bullshit. This and many others are memes propagated like knee-jerk
reactions, and without foundation. What, exactly, does it mean to
"write for the orchestra like it's a piano"??? As if there were a
giant sustain pedal for one, or a waltz figure being bass-note, chord,
chord meant that that betrayed the composer's 'keyboard centricity'.

Many people level this criticism at Schumann, too. I've heard all the
same Schumann cliches. Few of these people, if any, could ever write
anything touching the slow movement of the 2nd symphony,
orchestrationally or otherwise.

When I next listen to the glorious Brahms 4th passacaglia, and hear
the wonderful major variation with the trombones, I'll think of your
comment and laugh to myself.
:) I trust my ears, and the hairs on my arms, thank you very much. :)

> And as well it was frozen there and deeply connected with traditional
> music. In my opinion classical symphonic orchestra is a very
> problematic conglomerate of instruments, it can't work well.

I agree and disagree. It's been used well and the past, and is used
well by some now....is it 'new'??? Are new combinations possible???
Maybe not.

> In fact
> it's dead since the times of Mahler, Richard Strauss, Holst or
> Stravinski Russian period, which is about 100 years ago. This
> orchestra is good only for performing that historical music. So
> called mastery of orchestration means just to keep some basic rules
> which force composer to do lot of compromises in music itself.

Maybe this is akin to saying that nothing good comes of knowing how to
write counterpoint in the 14th century style, which I suppose many
would consider 'uesless'. I wouldn't. See below re:compromises,
realting to MIDI, etc.

> I personally didn't have much desire to write for orchestra, but
> there were some opportunities and duties during my study at Music
> Academy, and later some commissions. Because orchestra is an
> established music institution, "ready made", and when composer is
> obedient and keeps all rules, his work will be performed (same like
> writing music for piano trio, string quartet or wind quintet -
> there's a big chance it will be performed as there are many of such
> groups, but for which price? They perform mainly historical music and
> are unable to play well contemporary music.

This is true. As for contemporary playing ability, that depends on the
score and the orchestra involved. Are we talking Fernyhough or Reich
here?

> Composer can't do
> miracles, music will sound the same like many others. With some
> exceptions, of course, like Kronos quartet...). So I did, and tried
> to do my best and find my ways. During writing more orchestral pieces
> (all were written and performed between 1980 - 2005) - last one being
> "Double concerto for pipe organ and drums" where I performed organ
> part - I thought a lot about orchestra and its role in the
> contemporary music. I published a large study in Czech music magazine
> on this. Just a few points:
>
> - To get a good sound from it, you have to write rather traditional
> music, because a lot of acoustic rules must be kept, especially when
> using tutti. But there's no reason just to copy Mahler, Strauss,
> Debussy, Ravel or Stravinski... They have used and exhausted all
> possible combinations. It's difficult if not impossible to invent
> anything new with classical orchestra. So it's loss of time to study
> this classical orchestration art. Despite this it's good to study and
> to be able to use it, just for purpose of some commission for
> functional music. But this knowledge is not too helpful when
> composing autonomous New music, only to know, how NOT to use orchestra.

There is a kind of anxiety of 'newness' in Western culture that
perhaps ought be reconsidered. Nothing is ever truly new, first of
all. Secondly, good music comes from a dialog with tradition. We
needn't think that being in dialog with tradition is a bad thing.

> - To get really contemporary sound from orchestra is not far from
> impossible. Result will be only bad sound, always very similar, only
> bad in a different way - it was exhausted in 50/60ies. Result will be
> otherwise Darmstadt overorganized multiserialism chaos of 50ies or
> Polish school random aleatorics chaos of 60ies. I met some totally
> unable wannabe contemporary composers doing this and declaring that
> exactly this sound they wanted (trying to hide their inability to
> write well for orchestra and find anything new).

Now you're talking! ;)

> - Instruments in orchestra are not well balanced. When I'll
> exaggerate slightly the problem, three flutes in "ff" are nothing
> against one trumpet in "p". Then all that masking effect, and take
> into the account unbalanced ranges of acoustic instruments and its
> narrow connection to dynamics (usually it's not possible to play ff
> in low range and pp in upper range). Nothing to say about drums and
> percussions - for contemporary music we need a lot more then
> triangle, piatti, timpani, snare and kick drum. There's too much of
> strings, which are usually the most problematic performers concerning
> rhythm... Some instruments are there just for visual effect, like
> harp... And lot of useful instruments missing. Bass range is totally
> unsatisfying.
> I personally solved some of these problems by using whole blocks of
> instruments together (woodwinds, brass, strings, drums and other...),
> which is of course limitation.

Isn't this part of the art...limitation? In fact, someone named Daniel
Forro once went on and on about working with MIDI and all its
limitations, and said that this is part of the art and skill of
composing. Wait---that was YOU? ;)

> - Nothing to say about many other things used in contemporary music,
> which traditionally educated orchestral performers (used to perform
> mainly historical music) can't do well, be it special ways of
> playing, microtones, polyrhythms, polytempos, jazz feeling (for
> example) etc. etc..

I don't think anyone in the Chicago symphony has these
problems....maybe the jazz part somewhat....

> - Orchestra can sound well only when recorded in studio, not under
> normal conditions in the concert halls. Then multi mike technique can
> be used (yes, I know, now it's again fashionable to do contact
> recording with one stereo mike, but that has no sense in my opinion -
> if we have such good technology, why not to use it fully?) to balance
> artificially instruments and groups artificially. Then also some
> other instruments can be added which is normally (that means purely
> acoustically without amplification)  impossible (whole plucked group,
> ethnic instruments, lot of special small drums...). We can use also
> audio processing, unusual and until now unused sound effects, use
> artificial reverb, to use sound microscoping (by amplyfing very low
> volume sounds)... Only this way can orchestra become again
> unbelievably creative tool with unlimited palette of sounds and their
> combinations. Even using of electronic instruments in such orchestra
> is possible, and everything can be well mixed together. Nothing to
> say about possibility of recording certain sounds, or atmospheres,
> process them electronically, use them as samples and mix them again
> with real record... Unlimited possibilities in the domain of
> acoustics and electronics. But could any of you imagine existence of
> such experimental studio orchestra as a stable institution? It would
> be too nice... UNESCO should establish it, one such orchestra in the
> world would be enough :-) in the beginning, reserved only for
> experiments, film music, selling records of such music... Pure Utopia.

Yes, I agree...this is a new avenue to explore...Takemitsu would have loved it.
Speaking of Takemitsu, there's someone who did new things with an orchestra.

> - That means, when we think further in this direction, there's only
> one practical solution nowadays, thanks to technology: MIDI orchestra
> with sampled, physically modelled or FM or additive harmonic
> synthesis.... or more ... imitated acoustic sounds plus using also
> electronic colors of all kind. This can even sound much better then
> real orchestra after some effort (don't forget Dolby Surround). Of
> course it is possible to do all necessary controller changes (pitch
> bend, vibrato, tremolo, volume, expression, panorama + stereo width,
> sharpness of sound, resonance, attack, release, delay, echo,
> reverb...), using different articulation (thanks to many kind of
> sampled instruments). Yes, it's a question if we can still call such
> body an orchestra - but the most important is it works for
> contemporary music (at least for me). Besides musician (composer) is
> independent on all those issues connected with real orchestra as a
> music institution... We can use any microtuning impossible to get
> from real performers and their limited instruments... We can get
> unheard sound colors because we can use instruments out of their
> usual range (like contrabass piccolo, or three octaves higher tuba)
> and dynamics limitations, nothing to say about fast playing without
> taking a breath, impossible trills, tremolos, jumps, polyphony (with
> MIDI there's no problem to get 64 flutes together in full piano
> range), glissandos, unheard articulation and sound combinations and
> their mixing and shadowing... And more creative possibilities to
> change natural sound colors or attack and release phase of envelope,
> or all those specialties of physical modelling - creating new
> syncoustic virtual instruments like floboe, vioflute, flumbone,
> claritar, harpsax or so...

You may be right, but thus far many of the prototypical experiments in
this direction leave me unimpressed. FM has it's uses, but creating a
warm sound to bathe in isn't typically one of them. Good for making
fun of the 80s, though. :)

Many digital synthesis techniques are sterile sounding. Physical
modelling is certainly promising, but it's going to be a while before
it really rivals a real flesh and blood instrument.

AKJ

> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗Carl Lumma <carl@...>

9/12/2009 6:03:56 PM

Aaron wrote:

>> Csound is a programming language, GPO is a MIDI instrument.
>> See the difference?

>I think we've been talking about different things. I was talking about
>orchestration as an art, Chuckk said "absolutely, a real orchestra
>take a lifetime to master", and then you said "no, Synful shows that
>it doesn't"...

You said Csound is as unforgiving as a real orchestra.
Any synthesizer capable of synthesizing orchestral instruments
is as unforgiving as a real orchestra, isn't it?
Csound is harder still, because it's a programming language
not an instrument. First you have to make an instrument, then
you can start to worry about orchestration.

>which is not what I was talking about, nor what I believe Chuckk was
>talking about!

If you have a point to make I would be interested to read it.

-Carl

🔗Aaron Johnson <aaron@...>

9/13/2009 11:39:22 AM

On Sat, Sep 12, 2009 at 8:03 PM, Carl Lumma <carl@...> wrote:

> You said Csound is as unforgiving as a real orchestra.

Yes, but what I more accurately should have said, as you say below, is
that it's as unforgiving as a real orchestra b/c you have to design
the instruments themselves to sound good (assuming that fits your
aesthetic for the piece, of course).

> Any synthesizer capable of synthesizing orchestral instruments
> is as unforgiving as a real orchestra, isn't it?

Not as much, since you have artificial mixing and panning on your
side. That was one of my points. Something that might work
electronically doesn't necessarily translate to a real orchestra. You
seemed to claim, at first anyway, that someone who wrote well for a
MIDI Synful setup could graduate with a degree in orchestration.

> Csound is harder still, because it's a programming language
> not an instrument.  First you have to make an instrument, then
> you can start to worry about orchestration.

That's my point. We agree. Amen.

>>which is not what I was talking about, nor what I believe Chuckk was
>>talking about!
>
> If you have a point to make I would be interested to read it.

I made it. Sorry you missed it. I was just trying to actually get you
to clarify *your* position re:orchestration of a flesh-and-blood
orchestra *not* being the same as any electronic setup.

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

🔗touchedchuckk <BadMuthaHubbard@...>

9/16/2009 4:09:37 AM

--- In MakeMicroMusic@yahoogroups.com, Aaron Johnson <aaron@...> wrote:
>
> On Fri, Sep 11, 2009 at 10:42 PM, Daniel Forro <dan.for@...> wrote:
> >
> > And as well it was frozen there and deeply connected with traditional
> > music. In my opinion classical symphonic orchestra is a very
> > problematic conglomerate of instruments, it can't work well.

One of many possible problematic conglomerates of instruments, the vast majority of which are all but forgotten. Which is to say, it evolved into that particular conglomeration.
The fact that it's not easy to use, that flutes aren't as loud as trumpets, is no proof that it's not a good tool. If I use my vacuum cleaner and my CD player at the same time, I don't hear what I want to hear either, but they both belong in my apartment.

> > - To get a good sound from it, you have to write rather traditional
> > music, because a lot of acoustic rules must be kept, especially when
> > using tutti. But there's no reason just to copy Mahler, Strauss,
> > Debussy, Ravel or Stravinski... They have used and exhausted all
> > possible combinations. It's difficult if not impossible to invent
> > anything new with classical orchestra. So it's loss of time to study
> > this classical orchestration art. Despite this it's good to study and
> > to be able to use it, just for purpose of some commission for
> > functional music. But this knowledge is not too helpful when
> > composing autonomous New music, only to know, how NOT to use orchestra.
>
> There is a kind of anxiety of 'newness' in Western culture that
> perhaps ought be reconsidered. Nothing is ever truly new, first of
> all. Secondly, good music comes from a dialog with tradition. We
> needn't think that being in dialog with tradition is a bad thing.

I'm not myself too interested in tradition, but I also perceive a disproportionate obsession with newness around me. If the only merit a work has is that it is new, it will soon lose the only thing it had to recommend it. I personally don't listen to music because it's new, and especially don't derive the most pleasure from knowing that it's new. The works in all forms of art that tend to affect me the most are those that were considered strange when they were contemporary, and continue to be considered so. They may have been influential, but they didn't end up being "one of" some group. Also consider that the person who perfects some style is not always the same person who invented it. The inventor is likely to become bored and invent something else rather than refining his one invention to greatness.

> > - That means, when we think further in this direction, there's only
> > one practical solution nowadays, thanks to technology: MIDI orchestra

Why are people so hung up on MIDI? It's OLD. It was designed for the masses, and for a limited set of styles. It is no longer the only option, and the only reason to cling to it is being unwilling to learn the alternatives. It's another self-imposed limitation.

> > with sampled, physically modelled or FM or additive harmonic
> > synthesis.... or more ... imitated acoustic sounds plus using also
> > electronic colors of all kind. This can even sound much better then
> > real orchestra after some effort (don't forget Dolby Surround). Of
> > course it is possible to do all necessary controller changes (pitch
> > bend, vibrato, tremolo, volume, expression, panorama + stereo width,
> > sharpness of sound, resonance, attack, release, delay, echo,
> > reverb...), using different articulation (thanks to many kind of
> > sampled instruments). Yes, it's a question if we can still call such
> > body an orchestra - but the most important is it works for
> > contemporary music (at least for me). Besides musician (composer) is
> > independent on all those issues connected with real orchestra as a
> > music institution... We can use any microtuning impossible to get
> > from real performers and their limited instruments... We can get
> > unheard sound colors because we can use instruments out of their
> > usual range (like contrabass piccolo, or three octaves higher tuba)
> > and dynamics limitations, nothing to say about fast playing without
> > taking a breath, impossible trills, tremolos, jumps, polyphony (with
> > MIDI there's no problem to get 64 flutes together in full piano
> > range), glissandos, unheard articulation and sound combinations and
> > their mixing and shadowing... And more creative possibilities to
> > change natural sound colors or attack and release phase of envelope,
> > or all those specialties of physical modelling - creating new
> > syncoustic virtual instruments like floboe, vioflute, flumbone,
> > claritar, harpsax or so...
>
>
> You may be right, but thus far many of the prototypical experiments in
> this direction leave me unimpressed. FM has it's uses, but creating a
> warm sound to bathe in isn't typically one of them. Good for making
> fun of the 80s, though. :)
>
> Many digital synthesis techniques are sterile sounding. Physical
> modelling is certainly promising, but it's going to be a while before
> it really rivals a real flesh and blood instrument.

I think I agree with Daniel on one thing, but I can only speak for myself. Once you have a new tool in your arsenal, you don't go back to not having it. I went to a mostly jazz-oriented school where the faculty would get all excited when someone commissioned a big band piece, and proudly announce, "See, it's coming back! Big band will always be around! You should learn this stuff well, because it's really in style!" But they were being selective. Most music opportunities involve some fairly sophisticated use of computers, and that will only change by replacing computers with something more advanced, not by the old tools reasserting themselves.
But I think Daniel's list is still restrictive; what you can do with flute sounds is not the most exciting part, nor imitating physical articulations, etc. Those things should be tried, sure, but computers don't force you to start with something from a real orchestra.

I don't see the dominance of 12-tET being pushed aside as easily as the dominance of the symphony orchestra; that's not so much a limitation of the tools as a limitation of the users' understanding.

-Chuckk

>
> AKJ
>
> > ------------------------------------
> >
> > Yahoo! Groups Links
> >
> >
> >
> >
>
>
>
> --
>
> Aaron Krister Johnson
> http://www.akjmusic.com
> http://www.untwelve.org
>

🔗touchedchuckk <BadMuthaHubbard@...>

9/16/2009 4:28:09 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Chuckk wrote:
> >Absolutely. A real orchestra takes a lifetime to master.
>
> On the contrary, software like Synful shows that it does not.

Just to clarify, by "a real orchestra" I meant "a real orchestra".

> (And the synthesis technique, by the way, is not one that can
> be implemented in Csound.)

Csound is a Turing-complete programming language, and anything that can be done in any programming language can be done in any other. Synful appears to use database lookup and comparison of the nature of phrases; it had to be told what patterns to look for in those phrases by a human. In other words, Csound won't do that all automatically, but Synful doesn't really either, it's been told how by someone. There's a little man inside it combining all the instrument sounds...

-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/16/2009 4:44:48 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Aaron wrote:
>
> >> Csound is a programming language, GPO is a MIDI instrument.
> >> See the difference?
>
> >I think we've been talking about different things. I was talking about
> >orchestration as an art, Chuckk said "absolutely, a real orchestra
> >take a lifetime to master", and then you said "no, Synful shows that
> >it doesn't"...
>
> You said Csound is as unforgiving as a real orchestra.
> Any synthesizer capable of synthesizing orchestral instruments
> is as unforgiving as a real orchestra, isn't it?

Any synthesizer capable of synthesizing them 100% faithfully, yes. There are moments in the Synful examples I've heard that are clearly not real orchestras. Also, it's very easy to make a real violin sound horrible. Is it as easy to make a Synful violin sound horrible? I would guess it's next to impossible, and if I'm correct, then it's not really synthesizing a violin 100% faithfully (which I also don't believe is its purpose).

> Csound is harder still, because it's a programming language
> not an instrument. First you have to make an instrument, then
> you can start to worry about orchestration.
>
> >which is not what I was talking about, nor what I believe Chuckk was
> >talking about!
>
> If you have a point to make I would be interested to read it.

My own point about a real orchestra was that you don't have to work so much harder to make Csound sound good than e.g. Stravinsky had to work to make orchestral works sound good. He spent his life on it, and worked his nuts off.
I'm a phenomenally lazy person myself, and have managed to coax what I consider to be some pretty cool sounds out of Csound.

Nothing against Synful, by the way. It appears to be a very sophisticated creation.

-Chuckk

🔗Carl Lumma <carl@...>

9/16/2009 10:43:26 AM

>> (And the synthesis technique, by the way, is not one that can
>> be implemented in Csound.)
>
>Csound is a Turing-complete programming language, and anything that
>can be done in any programming language can be done in any other.
>Synful appears to use database lookup and comparison of the nature of
>phrases; it had to be told what patterns to look for in those phrases
>by a human. In other words, Csound won't do that all automatically,
>but Synful doesn't really either, it's been told how by someone.
>There's a little man inside it combining all the instrument sounds...

Synful has already been told how, Csound hasn't. Synful's additive
synthesis is more advanced than anything that's ever been implemented
in Csound. Synful is designed for realtime use with MIDI, Csound
supports these peripherally. Try writing the database lookup in
Csound and see what the performance is like. etc. -Carl

🔗Carl Lumma <carl@...>

9/16/2009 10:58:06 AM

>Any synthesizer capable of synthesizing them 100% faithfully, yes.
>There are moments in the Synful examples I've heard that are clearly
>not real orchestras. Also, it's very easy to make a real violin sound
>horrible. Is it as easy to make a Synful violin sound horrible? I
>would guess it's next to impossible, and if I'm correct, then it's not
>really synthesizing a violin 100% faithfully (which I also don't
>believe is its purpose).

Composers and conductors do not have power over the sound quality
of individual violins, either.

>My own point about a real orchestra was that you don't have to work so
>much harder to make Csound sound good than e.g. Stravinsky had to work
>to make orchestral works sound good. He spent his life on it, and
>worked his nuts off.

Any synthetic orchestra presents all the challenges that Stravinsky
worked on, in addition to whatever computer wrangling is required.
You become composer, conductor, and instrumentalist. Stravinsky
generally only worked on the first two.

>I'm a phenomenally lazy person myself, and have managed to coax what I
>consider to be some pretty cool sounds out of Csound.

It's not a fair comparison because all of us benefit from Stravinsky's
work, just like biologists benefit from Darwin's. But I don't think
any of us are putting Stravinksy's legacy in danger just yet.

>Nothing against Synful, by the way. It appears to be a very
>sophisticated creation.

My point is, if you want to make good electronic music software that
has a chance of becoming widely used by musicians (and that happens
to be capable of microtuning), you had better hide anything to do
with Csound under a blanket. If you want to make something for
electronic music geeks, you're on the right track. I say more power
to you either way.

-Carl

🔗Chris Vaisvil <chrisvaisvil@...>

9/16/2009 12:06:30 PM

Carl,

Sort of off topic - I'm downloading the (very impressive) Synful orchestra
demo.

Is it capable of microtuning? Or does one have to resort to work arounds?

Nothing I saw on the site suggested the software can accept alternate
tunings.

Thanks,

Chris

[Non-text portions of this message have been removed]

🔗Carl Lumma <carl@...>

9/16/2009 12:38:54 PM

At 12:06 PM 9/16/2009, you wrote:
>Carl,
>
>Sort of off topic - I'm downloading the (very impressive) Synful orchestra
>demo.
>
>Is it capable of microtuning?

Not yet. I've been working on the developer for years now.
He's promised to do it but hasn't set a schedule. I think
it'll happen soon though.

-Carl

🔗Marcel de Velde <m.develde@...>

9/16/2009 12:52:19 PM

>
> Not yet. I've been working on the developer for years now.
> He's promised to do it but hasn't set a schedule. I think
> it'll happen soon though.
>

Ok thanks Carl.
Which implementation have you requested?
Scala .scl file support or Midi Tuning Standard?
Think it would help if some of us here would send them an email too?
I was just about to do this, would love to use it myself too.

-Marcel

[Non-text portions of this message have been removed]

🔗Carl Lumma <carl@...>

9/16/2009 12:58:31 PM

At 12:52 PM 9/16/2009, you wrote:
>>
>> Not yet. I've been working on the developer for years now.
>> He's promised to do it but hasn't set a schedule. I think
>> it'll happen soon though.
>>
>
>Ok thanks Carl.
>Which implementation have you requested?
>Scala .scl file support or Midi Tuning Standard?
>Think it would help if some of us here would send them an email too?
>I was just about to do this, would love to use it myself too.
>
>-Marcel

E-mail never hurts. I've suggested both .scl and MTS,
as well as adaptive tuning features, and he's interested in
all of them.

-Carl

🔗Chris Vaisvil <chrisvaisvil@...>

9/16/2009 1:07:43 PM

Well, I just emailed them to see if I could talk them into offering older
versions at a discount - $470 is really steep.

On Wed, Sep 16, 2009 at 3:52 PM, Marcel de Velde <m.develde@...>wrote:

>
>
> >
> > Not yet. I've been working on the developer for years now.
> > He's promised to do it but hasn't set a schedule. I think
> > it'll happen soon though.
> >
>
> Ok thanks Carl.
> Which implementation have you requested?
> Scala .scl file support or Midi Tuning Standard?
> Think it would help if some of us here would send them an email too?
> I was just about to do this, would love to use it myself too.
>
> -Marcel
>
> [Non-text portions of this message have been removed]
>
>
>

[Non-text portions of this message have been removed]

🔗Marcel de Velde <m.develde@...>

9/16/2009 1:13:22 PM

>
> Well, I just emailed them to see if I could talk them into offering older
> versions at a discount - $470 is really steep.
>

Perhaps we could ask about a group buy once they've implemented microtuning.
Or ask it now and speed up their urge to implement microtuning :)

-Marcel

[Non-text portions of this message have been removed]

🔗Carl Lumma <carl@...>

9/16/2009 2:43:47 PM

At 01:07 PM 9/16/2009, you wrote:
>Well, I just emailed them to see if I could talk them into offering
>older versions at a discount - $470 is really steep.

The developer is a former head of IRCAM -- one of the few
academics to successfully cross over into commercial music
software. Also, Synful is one of the only commercial products
ever to debut a novel synthesis technique (which had not
previously been demonstrated in the academic realm).

$470 isn't unreasonable compared to the massive samplers out
there. Kontakt costs $400, and most people buy expensive
samples to go with it. These products also have greater
hardware requirements.

Granted, GPO looks like a winner at $150 given that it already
supports microtuning. But, let's just say I'll refrain from
commenting on how I think it sounds.

Rather than ask Eric to lower the price, I think a good approach
would be to organize a group buy, contingent on .scl support
(at a minimum).

-Carl

🔗Carl Lumma <carl@...>

9/16/2009 2:45:31 PM

At 01:13 PM 9/16/2009, you wrote:
>>
>> Well, I just emailed them to see if I could talk them into offering older
>> versions at a discount - $470 is really steep.
>>
>
>Perhaps we could ask about a group buy once they've implemented microtuning.
>Or ask it now and speed up their urge to implement microtuning :)
>
>-Marcel
>

Indeed! If we could get 10 people to buy it at $300 or something.
I was given a free copy, but I'll even chip in on something like
that.

-Carl

🔗Chris Vaisvil <chrisvaisvil@...>

9/16/2009 4:27:13 PM

If I am employed at the time I could see with parting with $300. If not I'll
have to pass.

Using other high priced sample sets is not a very good argument.... A number
of freelance developers are offering pricing dependent on end use for
software and samples. and NI has released ver 3 for free as well as Kore for
free. Reaper comes to mind of a really good DAW for $50. Music is an
expensive hobby yes, but looking at the choice between a $500 sample set or
getting a piece of hardware.... well I think I get the hardware.

And you may not like GPO - but I bet you like it much more than the
soundfonts I used to use :-)

Chris

On Wed, Sep 16, 2009 at 5:45 PM, Carl Lumma <carl@...> wrote:

>
>
> At 01:13 PM 9/16/2009, you wrote:
> >>
> >> Well, I just emailed them to see if I could talk them into offering
> older
> >> versions at a discount - $470 is really steep.
> >>
> >
> >Perhaps we could ask about a group buy once they've implemented
> microtuning.
> >Or ask it now and speed up their urge to implement microtuning :)
> >
> >-Marcel
> >
>
> Indeed! If we could get 10 people to buy it at $300 or something.
> I was given a free copy, but I'll even chip in on something like
> that.
>
> -Carl
>
>
>

[Non-text portions of this message have been removed]

🔗Daniel Forro <dan.for@...>

9/16/2009 8:33:08 PM

I've read about Synful right now from some of your previous messages. Visited pages, have read everything, listened to all demos...

My rough opinion:

- no doubt about the developping team and its chief, he has great background

- basic idea is not bad (something similar is used by producers of synthesizers like Yamaha, Korg and Roland, even better - it works in real time for live play), but:

... total sound is still not so convincing

... I'm not quite sure if those strange unnaturally cutted note endings or fast decrescendos (like release in amplitude envelope with non linear shape) and unrealistic ambient or reverb or what is it is a mistake of demo song programmers or some issue in the software

... when simulating real orchestra it only emphasizes all those terrible kitsch cliches, all that Hollywoodish late romantic schlock balast which I hate on orchestra (and which I will strictly prohibit if I'm orchestra chief or conductor - all those vibratos, portamentos between notes, exaggerated crescendos/decrescendos...). Maybe some of them are just result of arranging schmaltz from the side of programmer, but I'm afraid lot of them are inserted automatically (because they are part of the original sample phrases database used in the program) and can't be cancelled.

... to emphasize in the promotion (between the lines, in the background, not directly) the easiness how even the unexperienced user will get great results is only a trick how to make people to buy it. As always with any software here is valid: garbage in, garbage out. It's only a tool. To me it's enough just to hear unable users of professional keyboards with style accompaniment. Now we will hear pseudo-symphonic works done by uneducated anybody with 470 USD in the pocket. Is this really good, Mr. Lindemann, to sell such powerfull weapons?

... from this point of view it wouldn't be bad to have different sample phrase databases for different music styles, which could be reflected not only in types of musical phrases, articulation and expression, but as well in the emulating various instrument selection. What would be great - to have some Gothic music ensembles, then Renaissance orchestra, Early Baroque, Bach, Early Classicism, Beethoven, Early Romantism - Weber/Schumann/Berlioz..., Top Romantism - Liszt/Wagner..., Classical Romantism - Dvorak/Brahms/Tchaikovski..., Late Romantism - Skriabin/Mahler/Strauss/Rachmaninoff/Sibelius... (that one we probably have :-) ), Impressionism - Debussy/Ravel, Folklorism - Stravinski/Bartok/Janacek..., and so on, through Expressionism, 50ies Multiserialism, 60ies Timbre Music/Aleatorics until contemporary orchestra... Or even some ethnic ensembles from all the world...
Mainly because something like an universal orchestra doesn't exist in the reality...

... I see a huge potential in that experimental direction showed in few really beautiful demo songs done by software author himself. (It looks this is what HE also likes with all his background!) I mean concerning the sound experiments with strange alienating (German has nice word "Verfremdung") and warping by physical modelling. I personally don't need common orchestra imitation offered by this software, why? But to have a chance to experiment electronically with the sound would be great. Of course we can do this even with acoustic samples, but physical modelling is quite different beast.

- microtonal support must be there

- price is too high

- it can't be used in the real time as a musical instrument

- If I understand well the principle it starts to use its full emulating engine AFTER the recording the track into the host sequencer. That means we still don't hear the final result during the recording. That's not good. Especially when the result is processed automatically and user can't influence it...

Just few points...

Daniel Forro

On 17 Sep 2009, at 6:43 AM, Carl Lumma wrote:

>
> At 01:07 PM 9/16/2009, you wrote:
> >Well, I just emailed them to see if I could talk them into offering
> >older versions at a discount - $470 is really steep.
>
> The developer is a former head of IRCAM -- one of the few
> academics to successfully cross over into commercial music
> software. Also, Synful is one of the only commercial products
> ever to debut a novel synthesis technique (which had not
> previously been demonstrated in the academic realm).
>
> $470 isn't unreasonable compared to the massive samplers out
> there. Kontakt costs $400, and most people buy expensive
> samples to go with it. These products also have greater
> hardware requirements.
>
> Granted, GPO looks like a winner at $150 given that it already
> supports microtuning. But, let's just say I'll refrain from
> commenting on how I think it sounds.
>
> Rather than ask Eric to lower the price, I think a good approach
> would be to organize a group buy, contingent on .scl support
> (at a minimum).
>
> -Carl
>

🔗Carl Lumma <carl@...>

9/16/2009 9:32:41 PM

Daniel wrote:

>- no doubt about the developping team and its chief, he has great
>background

As far as I know it is entirely Eric's work.

>- basic idea is not bad (something similar is used by producers of
>synthesizers like Yamaha, Korg and Roland, even better - it works in
>real time for live play), but:

I know of nothing similar by any of those companies.

>... total sound is still not so convincing

I don't know about convincing, but it sounds much better, to
me, than something like GPO, for reasons I will detail in
another message.

>... to emphasize in the promotion (between the lines, in the
>background, not directly) the easiness how even the unexperienced
>user will get great results is only a trick how to make people to buy
>it.

Not true. Just playing one-handed on a MIDI keyboard is enough
to make a solo violin *far* more realistic than any sampler,
unless one spends many hours of work choosing phrases out of
some 50 gig Vienna library (e.g. non-realtime vs. realtime).

>Is this really good, Mr. Lindemann, to sell such powerfull
>weapons?

Oh please.

>... from this point of view it wouldn't be bad to have different
>sample phrase databases for different music styles,

He does have different databases (a jazz band I think), or has
at least talked of developing them.

>- it can't be used in the real time as a musical instrument

??

>- If I understand well the principle it starts to use its full
>emulating engine AFTER the recording the track into the host
>sequencer. That means we still don't hear the final result during the
>recording. That's not good. Especially when the result is processed
>automatically and user can't influence it...

You misunderstand. It has lower latency on a laptop than
large samplers.

-Carl

🔗Carl Lumma <carl@...>

9/16/2009 10:00:20 PM

>And you may not like GPO - but I bet you like it much more than the
>soundfonts I used to use :-)

Not necessarily! There's something called Carlos' first law.
Or second law - or maybe it's the third. It states: every variable
you can control, you must control. A compressed soundfont with
four samples stretched across the keyboard may sound like crap,
but there isn't much to control. It will sound uniformly like
crap. GPO will give much higher fidelity, and as the logical
extreme of the sampling idea, it will sound more *superficially*
realistic.

It's this superficial realism that bothers my ear. Any quarter-
second excerpt of a GPO performance will be indistinguishable from
a real recording. But listen for longer, and you will start to
notice all the degrees of freedom that aren't being controlled.
You can't control them -- GPO doesn't give you the knobs.

Because Synful is a synthesizer, it can create novel performances
based on MIDI data. It's the same thing with Pianoteq vs.
something like Ivory (but even moreso, since orchestras have
many more degrees of freedom than pianos). Though Ivory's a
sampler, it actually does have DSP modeling of string resonance,
and it can interpolate 128 levels of dynamics between the 3-5
different dynamic samples for each key. But it still can't
match the overtone responses and so on, that come out of the
Pianoteq physical model, even though if you listen closely to
individual notes (especially in the treble), Ivory will sound
more "realistic" (the treble notes on all physically modeled
pianos I've heard buzz as they decay... I'm not sure why...).

An entire Ivory performance sounds like a bad piano recording,
compressed too much or something. A Pianoteq performance
breathes more (you can download several such comparisons from
the pianoteq forums). But the difference is subtle. It's
much more extreme with Synful vs. GPO, both in the superficial
and underlying realism departments.

To address Daniel's other point, yes, some of the demos on the
synful website are rubbish. Many of them were done on earlier
versions -- the additive engine was completely rewritten in
version 2.4. Also, I don't personally like the "sections"
patches (then again, I don't like real orchestra sections
either, being a fan of chamber music). It is possible to put
together a synful orchestra using one individual instrument for
each part, which I think gives much better results. If you
listen to the Beethoven string quartet example, you'll hopefully
see what I mean.

In summary, though I may even perversely prefer a homebrew
soundfont to GPO, I might prefer even more a simple orchestra of
triangle waves. That's how much superficial realism I'm willing
to give up to get underlying realism (musical expression).
I realize not everyone is willing to go that far, and I promise
not to criticize them for the next 6 months, starting... now.

-Carl

🔗Mike Battaglia <battaglia01@...>

9/16/2009 11:37:15 PM

> >Is this really good, Mr. Lindemann, to sell such powerfull
> >weapons?
>
> Oh please.

God forbid such heresy should take place. Let's not even bring up what
would happen if the jazz majors were to get a hold of this. Or these
so called "hip", "hop"pers.

On another note, how would you rate Synful as compared to EastWest or
VSL? Both of those were always a step up from GPO anyway IMO. Although
with GPO4 I just may be making the switch. (I sure hope you've heard
EWQL recordings other than my half-finished adaptations of
Chromosounds. Yikes.)

It seems like Synful's strings sound a bit less real than EWQL's, but
at the benefit of having a more well-rounded string sound overall...
EWQL's strings involve switching patches to different related string
sounds which excel at different parts of the string experience
(sustain no vibrato, sustain vibrato, legato melody, etc). I've never
really gotten the hang of the switching. Of course in the hands of
people who are well versed with it it sounds amazing.

Now, if you'll excuse me, I'm going to listen exclusively to loops of
common practice music tuned to quarter comma meantone.

-Mike

🔗jonszanto <jszanto@...>

9/16/2009 11:54:11 PM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote, right at the end of his post:
>
> I realize not everyone is willing to go that far, and I promise
> not to criticize them for the next 6 months, starting... now.

I agreed with almost every point in this post, and quite enjoyed your sense of humor, which I either usually miss or it doesn't show up all that often.

Mocking up an orchestra, virtually - not an easy task at all. In line with your last sentence: since I spend roughly 42 weeks of my year in the middle of (well, actually, near the rear-right of) a real-time, live orchestra, I've decided that I won't have any input on this subject at all, starting... now.

Cheers,
Jon

🔗Daniel Forro <dan.for@...>

9/17/2009 6:51:03 AM

On 17 Sep 2009, at 1:32 PM, Carl Lumma wrote:

> As far as I know it is entirely Eric's work.
>

Great.
>
> >- basic idea is not bad (something similar is used by producers of
> >synthesizers like Yamaha, Korg and Roland, even better - it works in
> >real time for live play), but:
>
> I know of nothing similar by any of those companies.
>

Yamaha - SA2 (Super Articulated voices) with AEM technology (Articulation Element Modeling)
KORG - RX (Real eXperience) with DNS technology (Defined Nuance Control)
Roland - Supernatural Technology in ARX Expansion Boards

Some use sample switching (which often depends on velocity, some other controller or interval between two consecutive notes), some use physical modeling.
>
> >... total sound is still not so convincing
>
> I don't know about convincing, but it sounds much better, to
> me, than something like GPO, for reasons I will detail in
> another message.
>

It should sound better, physical modeling is superior to sample playing. But here are still some reserves.

> Not true. Just playing one-handed on a MIDI keyboard is enough
> to make a solo violin *far* more realistic than any sampler,
> unless one spends many hours of work choosing phrases out of
> some 50 gig Vienna library (e.g. non-realtime vs. realtime).
>

I don't understand what you mean here. MIDI keyboard is MIDI keyboard, it has no tone generator. You exaggerate with one hand play, such violin emulation will be poor. To make violin sound realistic you need both hands - one playing MIDI keyboard with velocity and aftertouch, another one on controllers, and I would add breath controller and some pedals maybe PLUS violin mathematical model in tone generator PLUS deep knowledge and experience how to play violin (nothing to say about knowledge and experience how to play keyboard). Yes, I'm talking about Yamaha VL1. But still we are far from real violin, bandwidth is not enough just to emulate slow monophonic melody on violin with all that acoustic data which real instrument generates.
Anyway all this emulation of acoustic instruments by electronic means is ridiculous. Electronic instruments should produce innovative electronic sounds.
> He does have different databases (a jazz band I think), or has
> at least talked of developing them.
>
That's good news.
> >- it can't be used in the real time as a musical instrument
>
> ??
>
What's unclear here?

> >- If I understand well the principle it starts to use its full
> >emulating engine AFTER the recording the track into the host
> >sequencer. That means we still don't hear the final result during the
> >recording. That's not good. Especially when the result is processed
> >automatically and user can't influence it...
>
> You misunderstand. It has lower latency on a laptop than
> large samplers.
>
> -Carl
>

-------------
MIDI Look Ahead
Realistic note transitions begin before a new note starts. When a wind player plays a slur, the timbre and intensity of the current note begins to change well before the pitch changes to the new note. This anticipation of the upcoming note is a key aspect of expressive playing. Performers can do this because they know ahead of time when they are going to play a new note and they prepare for it with changes in embouchure and breath pressure.

To model this kind of expression a synthesizer needs to have the same knowledge of the future. Synful Orchestra handles this with two modes of operation. When playing live from a keyboard Synful Orchestra has no knowledge of when a new note is coming, and so it does its best to react as expressively as possible with low latency when a new note occurs. However, when playing from a sequence Synful Orchestra can optionally add a delay to the Midi input. This delay allows Synful Orchestra to look ahead in the Midi sequence, recognize when a new note is coming, and anticipate it with appropriate changes to the timbre and intensity of the current note. The result is more realistic and natural phrasing.

--------------

What they write here, means there is still some difference between real time performing and playing recorded tracks when Delay for Expression is activated. I didn't work with it unlike you, so you know better. If I'm wrong you can correct me.

Daniel Forro

🔗jonszanto <jszanto@...>

9/17/2009 10:28:55 AM

--- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@...> wrote:
> Anyway all this emulation of acoustic instruments by electronic means
> is ridiculous. Electronic instruments should produce innovative
> electronic sounds.

I think it is a great relief for everyone to find out there is only one way to do things, and that anything else is a waste of time.

Cheers,
Jon

🔗Daniel Forro <dan.for@...>

9/17/2009 6:20:42 PM

There are always many ways how to do things and everybody can find his/her best one. I have just written my opinion after many years of work with both acoustic and electronic instruments. Both groups have their strong and weak points, and some attitudes or principles can be transferred from one group to the other. For me the best way is to combine them to cover weak points.

Of course in my daily work I use electronic instruments also for making acoustic sounds, and acoustic instruments to produce electronic sounds, why not. I found the most interesting field syncoustic sounds based on the physical modeling by combining different drivers and resonators.

Daniel Forro

On 18 Sep 2009, at 2:28 AM, jonszanto wrote:

>
> --- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@...> > wrote:
> > Anyway all this emulation of acoustic instruments by electronic > means
> > is ridiculous. Electronic instruments should produce innovative
> > electronic sounds.
>
> I think it is a great relief for everyone to find out there is only > one way to do things, and that anything else is a waste of time.
>
> Cheers,
> Jon
>

🔗jonszanto <jszanto@...>

9/17/2009 6:57:29 PM

--- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@...> wrote:
> There are always many ways how to do things and everybody can find
> his/her best one. I have just written my opinion after many years of
> work with both acoustic and electronic instruments.

Thanks - your initial statement seemed a much more narrow view of the possibilities inherent in our toolsets, and it doesn't sound like the case.

Cheers,
Jon

🔗Rick McGowan <rick@...>

9/18/2009 10:10:09 AM

--- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@...> wrote:
> > Anyway all this emulation of acoustic instruments by electronic means > > is ridiculous. Electronic instruments should produce innovative > > electronic sounds.
> Hmmm. Right.

For me the main reason for using electronic means of orchestral production is to hear a mock-up of what I'm writing in some semblance of what it might sound like if played by a real orchestra. On the planet where I live, mere mortals can't just have a full orchestra at their beck and call to perfectly perform their work at any time of day or night. Especially while it's still work-in-progress.

"Innovative electronic sounds" have their place, of course, but I think there's room in the world for more than one opinion.

Rick

🔗Rick McGowan <rick@...>

9/18/2009 10:48:34 AM

Ah, yes, for what it's worth, I was sort of puzzled myself. :-)
Rick

jonszanto wrote:
> --- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@...> wrote:
> >> There are always many ways how to do things and everybody can find >> his/her best one. I have just written my opinion after many years of >> work with both acoustic and electronic instruments.
>> >
> Thanks - your initial statement seemed a much more narrow view of the possibilities inherent in our toolsets, and it doesn't sound like the case.
>
> Cheers,
> Jon
>

🔗Carl Lumma <carl@...>

9/18/2009 3:25:46 PM

Daniel wrote:

>> >- basic idea is not bad (something similar is used by producers of
>> >synthesizers like Yamaha, Korg and Roland, even better - it works in
>> >real time for live play), but:
>>
>> I know of nothing similar by any of those companies.
>
>Yamaha - SA2 (Super Articulated voices) with AEM technology
>(Articulation Element Modeling)
>KORG - RX (Real eXperience) with DNS technology (Defined Nuance Control)
>Roland - Supernatural Technology in ARX Expansion Boards

As is typical for the big synth makers, none of these are
described in anything but the vaguest terms, so it's impossible
to know how they really work. But it's quite clear none of
them are additive synthesis like Synful.

>Some use sample switching (which often depends on velocity, some
>other controller or interval between two consecutive notes),

Sample switching is a perfectly standard sampling technique.
Detecting legato playing from inter-note timing and velocity
and so on is more advanced than usual. But Synful matches
complete phrases on many different attributes.

>It should sound better, physical modeling is superior to sample
>playing. But here are still some reserves.

It's additive synthesis, not physical modeling.

>> Not true. Just playing one-handed on a MIDI keyboard is enough
>> to make a solo violin *far* more realistic than any sampler,
>> unless one spends many hours of work choosing phrases out of
>> some 50 gig Vienna library (e.g. non-realtime vs. realtime).
>>
>
>I don't understand what you mean here. MIDI keyboard is MIDI
>keyboard, it has no tone generator.

I mean, with Synful.

>You exaggerate with one hand
>play, such violin emulation will be poor. To make violin sound
>realistic you need both hands - one playing MIDI keyboard with
>velocity and aftertouch, another one on controllers, and I would add
>breath controller and some pedals maybe PLUS violin mathematical
>model in tone generator PLUS deep knowledge and experience how to
>play violin (nothing to say about knowledge and experience how to
>play keyboard).

Obviously you haven't tried Synful. It does help to use the
mod wheel, but it is not necessary to get a realtime result
superior to samplers.

>Yes, I'm talking about Yamaha VL1.

That's physical modeling synthesis. Very different. Its
winds were great but its strings were not.

>Anyway all this emulation of acoustic instruments by electronic means
>is ridiculous. Electronic instruments should produce innovative
>electronic sounds.

What matters is expressivity. Humans are adapted to express
and perceive expression through physical systems, with linear
responses. Acoustic instruments are such systems, and they have
furhermore evolved over centuries to be particularly suited for
musical expression. Modeling them with a physical model, or
resynthesizing them with additive synthesis, is a natural place
to start.

Furthermore, it is a skill to be able to appreciate a particular
instrument. The ear must learn to dissect the timbre. I did
not appreciate electric guitar until I was in my teens, but now
I love it. It's one reason that instrumentalists tend to
overwhelmingly listen to music featuring their instrument, when
they listen to music for enjoyment -- their ears are expert at
hearing it. Well, our audiences have lifetimes of learning
existing instruments under their belts. Why throw that away?

In principle, I agree that the real acoustic instrument would
always be preferred. But for reasons of expense, convenience,
and especially in microtonalism, intonation accuracy/flexibility,
it makes sense to turn to computers.

>> >- it can't be used in the real time as a musical instrument
//
>What they write here, means there is still some difference between
>real time performing and playing recorded tracks when Delay for
>Expression is activated. I didn't work with it unlike you, so you
>know better. If I'm wrong you can correct me.

It does have two modes, but the difference in realism is not
large. The realtime mode is still quite good, and requires less
in the way of computing hardware to respond with low latency than
a gigabyte sampler.

-Carl

🔗Daniel Forro <dan.for@...>

9/18/2009 6:14:40 PM

On 19 Sep 2009, at 7:25 AM, Carl Lumma wrote:

>
> Daniel wrote:
>
> >Yamaha - SA2 (Super Articulated voices) with AEM technology
> >(Articulation Element Modeling)
> >KORG - RX (Real eXperience) with DNS technology (Defined Nuance > Control)
> >Roland - Supernatural Technology in ARX Expansion Boards
>
> As is typical for the big synth makers, none of these are
> described in anything but the vaguest terms, so it's impossible
> to know how they really work. But it's quite clear none of
> them are additive synthesis like Synful.
>

Yes, sometimes those marketing terms are funny, and foggy. All of those methods work only with ROM samples.
> Sample switching is a perfectly standard sampling technique.
> Detecting legato playing from inter-note timing and velocity
> and so on is more advanced than usual. But Synful matches
> complete phrases on many different attributes.
>
That's excellent and I'm quite sure it will be better and better in the future.

> >It should sound better, physical modeling is superior to sample
> >playing. But here are still some reserves.
>
> It's additive synthesis, not physical modeling.
>
Oh yes, my mistake... But if additive resynthesis is done on the base of previous sound analysis, it can be considered a kind of modeling.
> Obviously you haven't tried Synful. It does help to use the
> mod wheel, but it is not necessary to get a realtime result
> superior to samplers.
>
In comparison with samplers it must be better, I have meant physical modeling.
> That's physical modeling synthesis. Very different. Its
> winds were great but its strings were not.
>
Bowed instruments are really difficult to emulate, that bow pressure and bowing speed... When the parameters of physical model are assigned to real time controllers, and with skilled performer it's usable.
Plucked instruments are more easy to emulate.
> What matters is expressivity. Humans are adapted to express
> and perceive expression through physical systems, with linear
> responses. Acoustic instruments are such systems, and they have
> furhermore evolved over centuries to be particularly suited for
> musical expression. Modeling them with a physical model, or
> resynthesizing them with additive synthesis, is a natural place
> to start.
>
> Furthermore, it is a skill to be able to appreciate a particular
> instrument. The ear must learn to dissect the timbre. I did
> not appreciate electric guitar until I was in my teens, but now
> I love it. It's one reason that instrumentalists tend to
> overwhelmingly listen to music featuring their instrument, when
> they listen to music for enjoyment -- their ears are expert at
> hearing it. Well, our audiences have lifetimes of learning
> existing instruments under their belts. Why throw that away?
>
> In principle, I agree that the real acoustic instrument would
> always be preferred. But for reasons of expense, convenience,
> and especially in microtonalism, intonation accuracy/flexibility,
> it makes sense to turn to computers.
>
Everything here very well said, I agree.
I have the same experience as you with electric guitar sound :-)
(And with some music styles, too...)
> It does have two modes, but the difference in realism is not
> large. The realtime mode is still quite good, and requires less
> in the way of computing hardware to respond with low latency than
> a gigabyte sampler.
>
> -Carl
>
Thanks for your explanation. It's a temptation, but I can't afford it. There's a list of the other hardware and software I'd like to have, more essential for my work (my youngest computer has 7 years, or I have Sibelius 3, Absynth 2, I use OS9 for sequencing etc. etc.).

Daniel Forro

🔗Daniel Forro <dan.for@...>

9/18/2009 9:07:23 PM

I didn't say there's no room for other opinions, and didn't force anybody to agree with me. I just wrote my opinion, and logical conclusion based on these facts:

- Electronic instruments are not perfect in emulation of acoustic ones. Yes, they can be used this way, but if we use them this way, result can't be perfect, it's only substitution and fake. And there are some people who don't like compromises of this kind.

- Electronic instruments are perfect in making electronic sounds. Using them this way is perfect. Let's do it!

Is there something wrong with such considerations? I use both approaches, too.

Concerning real orchestra, of course there's a rare opportunity for most of us to work with it, and because orchestra as an institution has its rules, there's no room for experimenting or changing things later. Composer must give them finished score and parts and use effectively those few hours of rehearsal before concert. Result is always compromise. Again we can learn from it and come to conclusions:

- to write orchestral music, and use such helping aids like fake electronic orchestra during composition work to hear results, experiment and find the best solution for given case

- to use fake orchestra intentionally and exclusively and be happy with it

- to give up and work with electronic sounds only

- or something in between

- or to write only piano music :-)

Daniel Forro

On 19 Sep 2009, at 2:10 AM, Rick McGowan wrote:

>
> --- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@...> > wrote:
> > > Anyway all this emulation of acoustic instruments by electronic > means
> > > is ridiculous. Electronic instruments should produce innovative
> > > electronic sounds.
> >
>
> Hmmm. Right.
>
> For me the main reason for using electronic means of orchestral
> production is to hear a mock-up of what I'm writing in some > semblance of
> what it might sound like if played by a real orchestra. On the planet
> where I live, mere mortals can't just have a full orchestra at their
> beck and call to perfectly perform their work at any time of day or
> night. Especially while it's still work-in-progress.
>
> "Innovative electronic sounds" have their place, of course, but I > think
> there's room in the world for more than one opinion.
>
> Rick
>

🔗aum <aum@...>

9/19/2009 3:44:53 AM

Daniel Forro wrote:
>>> It should sound better, physical modeling is superior to sample
>>> playing. But here are still some reserves.
>>> >> It's additive synthesis, not physical modeling.
>>
>> > Oh yes, my mistake... But if additive resynthesis is done on the base > of previous sound analysis, it can be considered a kind of modeling.
> Can be other type of synthesis based on the previous sound analysis considered a kind of modeling too?
Milan

🔗aum <aum@...>

9/19/2009 4:00:50 AM

Daniel Forro wrote:
> I didn't say there's no room for other opinions, and didn't force > anybody to agree with me. I just wrote my opinion, and logical > conclusion based on these facts:
>
> - Electronic instruments are not perfect in emulation of acoustic > ones. Yes, they can be used this way, but if we use them this way, > result can't be perfect, it's only substitution and fake. And there > are some people who don't like compromises of this kind.
> I think this emulation has two parts - sound and expressiveness. Sound is generally no problem, at least the recorded one. In the recording you can't tell the difference between the original and sampled sound. They might be exactly the same in fact. The expressiveness depends on many factors, in some situations sampling, physical modelling or other kind of synthesis can produce the sound indistinguishable from the one of an acoustic instrument.

Is, for example, the electronic calculator only a substitution and fake of a mechanical one?
> - Electronic instruments are perfect in making electronic sounds. > Using them this way is perfect. Let's do it!
> What is the "electronic sound"?

Milan

🔗Daniel Forro <dan.for@...>

9/19/2009 7:34:03 AM

On 19 Sep 2009, at 8:00 PM, aum wrote:
> I think this emulation has two parts - sound and expressiveness. Sound
> is generally no problem, at least the recorded one. In the > recording you
> can't tell the difference between the original and sampled sound. They
> might be exactly the same in fact.
>
Might be, of course, because sample IS a record. But isolated record of one note is still very far from live expressive music. Problem of samplers is they connect isolated records of individual notes mechanically without any musical (and expressive) connection between them. On the acoustic instruments also space between two adjacent notes - transition - is very important. That's the reason for some new approaches to sampling (like those new Yamaha, Roland and Korg methods). Mainly velocity, aftertouch and buttons switching different samples in real time during performance can improve this.
This problem can be perfectly solved by sampling whole musical phrases (patterns, loops), thus we keep expressivity, but lose control of individual notes, which is one of the most positive moments of electronic instruments under MIDI control.
From this point of view Synful approach looks very promising. They recorded whole phrases, but don't play them as samples, but after resynthesizing them by additive synthesis. In between they can be processed as necessary. Still I don't understand well how all that analysis of musical situation, selection of appropriate phrases from database, converting to additive resynthesis and processed as necessary can run in real time...

[Similar problems of transitions exist in statistical analysis of music done by musicologists, for example melodic intervals, to find some typical patterns or characteristic melodic cells for certain music era, genre, composer... which can be later used in composition to emulate this era, genre, composer... When you analyse only two adjacent intervals, similarity is very far. The more members in the analyzed group of adjacent notes, similarity grows, but very soon you come to identity = copy, which was not target. ]
> The expressiveness depends on many
> factors, in some situations sampling, physical modelling or other kind
> of synthesis can produce the sound indistinguishable from the one > of an
> acoustic instrument.
>
I don't understand this sentence. In the beginning you use term "expressiveness", in the end "sound". Yes, sound can be very near, almost indistinguishable (with excellent instrument and excellent performer), but still it will differ in expressiveness. Whenever I have compared here in this discussion acoustic and electronic instruments, of course I've been talking about live performance!

On acoustic instruments sound can't be separated from expressiveness as both elements are directly connected. Sound is result of real time control (= expressiveness).
Problem of the most electronic instruments is they did this separation (because of technological of commercial limits), and lost expressiveness. If we accept it, we can live with it, we can work with it, made advantage and merit from it and call it for example "electronic sound" :-) There's nothing bad on it. Aesthetics of huge amount of art and commercial electronic music is based on this. Intentional coolness, sterility, robotically precise rhythm, rhythmic quantization, lack of expressivity... Futuristic picture of the dehumanized world...

Yes, I can agree that some electronic instruments can emulate well most of acoustic instruments, if we use this main factor: MIDI sequence them and then add additionally lot of controllers. Some of them need only few and can be well emulated in real time (drums, plucked instruments, industrial sound FX...), some others need more.
But despite this fact, and despite the fact I use also such emulation from practical reasons, I will keep my opinion that the main field for electronic instruments is to produce electronic sounds. In the history of electronic music instruments it was always so. In my opinion direction of emulating acoustic instruments by sampling, RAM or ROM, and sample based synthesizers, has nothing to do with real electronic synthesis of sound. It's just additional feature of this technology, which has of course its use, too.
Physical modelling is quite different story, here is still hidden a big future potential. Especially in syncoustic sounds - electronically produced sounds with character and expressivity of acoustic instruments.
> Is, for example, the electronic calculator only a substitution and > fake
> of a mechanical one?
>
Can't be compared. Resulting numbers can be the same from both types of calculators (here in Japan, pioneer country of digital technology, still lot of people use 400 years old abacus called soroban, and skilled person can count with it faster then another one with calculator), resulting sound will always differ.
> > - Electronic instruments are perfect in making electronic sounds.
> > Using them this way is perfect. Let's do it!
> >
> What is the "electronic sound"?
>
> Milan
If you are the person I suppose then it's funny you ask me same question as my three years old daughter :-) You are specialist in this field so you know the answer. OK, here you have mine definition:

All sounds which can't be produced directly by acoustic instruments. But we can get electronic sound from these instruments by electronic processing.

All sounds which are produced by electronic synthesis (analog, digital - additive, subtractive, multiplicative...). Typical electronic sound is sine wave (therefore it was selected as main sound for art electronic music in 50ies), but also the other electronically produced simple waveforms can't be made exactly with acoustic instruments. Good examples of electronic sounds are also certain inharmonic spectra (different then bell spectra), klangs, mixtures, some noises or modulated noises, or result of simple or multiple modulations (FM, AM, ring modulation, PWM, PM...).
Maybe after another 10, 50 or 100 years it will be possible to get acoustic sound from these instruments. But in my opinion this is not so important. I agree with you that sound is no problem. Problem is expressivity. When we will be able to have under direct (and simple) control in real time performance all aspects and elements of sound, then there will be no difference between acoustic and electronic instruments. This should be target. And in my opinion answer is physical modeling.

Concerning this question (which is maybe more philosophical then technological) we can also come to the conclusion, that electronic sound in fact doesn't exist because all sounds are from physical point of view changes of air pressure. Including all those which are produced by speaker - basically speaker is electronically controlled drum :-) At least this is opinion of my daughter.

Daniel Forro

🔗Daniel Forro <dan.for@...>

9/19/2009 7:38:03 AM

In rough and general sense yes, but we use here rather terms like "synthesizing", "setting", "patching", "programming"...

Classical resynthesis is connected with additive harmonic synthesis as far as I know.

Daniel Forro

On 19 Sep 2009, at 7:44 PM, aum wrote:
> >>
> > Oh yes, my mistake... But if additive resynthesis is done on the > base
> > of previous sound analysis, it can be considered a kind of modeling.
> >
> Can be other type of synthesis based on the previous sound analysis
> considered a kind of modeling too?
> Milan
>

🔗touchedchuckk <BadMuthaHubbard@...>

9/19/2009 8:59:40 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Like I said, I would think it prudent to ignore it for now, and
> rather to build Csound into the package, and pre-roll several basic
> synth patches, and expose them in the interface as presets.

I think I will do just that, thanks.
-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/19/2009 9:16:50 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>

> My point is, if you want to make good electronic music software that
> has a chance of becoming widely used by musicians (and that happens
> to be capable of microtuning), you had better hide anything to do
> with Csound under a blanket. If you want to make something for
> electronic music geeks, you're on the right track. I say more power
> to you either way.

That's not such a black-and-white question for most unpaid developers. I really wrote it in order to use it myself, but even including soundfont support only happened because I imagined others wanting to use it. How much unpaid work am I willing to do for imagined non-electronic music geeks who may or may not use my software no matter what I do? More than none, less than infinity. It was my intention from the beginning to wrap everything necessary in one executable- when the possibility arises- but clicking through the Csound installer's pages is pretty mild compared to what some free software developers make their users do. I am still waiting on some fixes from the Csound developers, and then the single-file executable promised in the Rationale README will be top of my list.

BTW, speaking of libraries, you mentioned QT before; I avoided QT because it can only be used in proprietary software by paying for a special license, and I'm not completely decided against writing proprietary software someday. Next GUI project I do, though, will use WxWidgets, which is far more dependable and cross-platform than what I used, TkInter.

-Chuckk

🔗aum <aum@...>

9/19/2009 12:47:09 PM

Daniel Forro wrote:
> In rough and general sense yes, but we use here rather terms like > "synthesizing", "setting", "patching", "programming"...
>
> Classical resynthesis is connected with additive harmonic synthesis > as far as I know.
> You are right, the point of my question was to show that the meaning of these terms is a matter of definition. In your previous replies I assumed your word "modeling" mean physical modelling (in Yamaha VL, etc. sense) not additive harmonic synthesis.

>> I think this emulation has two parts - sound and expressiveness. Sound
>> is generally no problem, at least the recorded one. In the >> recording you
>> can't tell the difference between the original and sampled sound. They
>> might be exactly the same in fact.
>>
>> > Might be, of course, because sample IS a record. But isolated record > of one note is still very far from live expressive music. Problem of > samplers is they connect isolated records of individual notes > mechanically without any musical (and expressive) connection between > them. On the acoustic instruments also space between two adjacent > notes - transition - is very important. That's the reason for some > new approaches to sampling (like those new Yamaha, Roland and Korg > methods). Mainly velocity, aftertouch and buttons switching different > samples in real time during performance can improve this.
> This problem can be perfectly solved by sampling whole musical > phrases (patterns, loops), thus we keep expressivity, but lose > control of individual notes, which is one of the most positive > moments of electronic instruments under MIDI control.
> From this point of view Synful approach looks very promising. They > recorded whole phrases, but don't play them as samples, but after > resynthesizing them by additive synthesis. In between they can be > processed as necessary. Still I don't understand well how all that > analysis of musical situation, selection of appropriate phrases from > database, converting to additive resynthesis and processed as > necessary can run in real time...
>
> [Similar problems of transitions exist in statistical analysis of > music done by musicologists, for example melodic intervals, to find > some typical patterns or characteristic melodic cells for certain > music era, genre, composer... which can be later used in composition > to emulate this era, genre, composer... When you analyse only two > adjacent intervals, similarity is very far. The more members in the > analyzed group of adjacent notes, similarity grows, but very soon you > come to identity = copy, which was not target. ]
> What I mean is that I disagree with your sentence "Electronic instruments are not perfect in emulation of acoustic ones." They can be if expressiveness is not needed.
>> The expressiveness depends on many
>> factors, in some situations sampling, physical modelling or other kind
>> of synthesis can produce the sound indistinguishable from the one >> of an
>> acoustic instrument.
>>
>> > I don't understand this sentence. In the beginning you use term > "expressiveness", in the end "sound". Yes, sound can be very near, > almost indistinguishable (with excellent instrument and excellent > performer), but still it will differ in expressiveness. Whenever I > have compared here in this discussion acoustic and electronic > instruments, of course I've been talking about live performance!
> If you play an electronic instrument using various controllers and other means to control sound variations (expressiveness) you can get in some situations the varying sound (expressive sound, the sound with expressiveness, ...) indistinguishable from the "sound with expressiveness" of acoustic instrument. And the result can be intentional, desired and perfect, not only the substitution, fake or compromise.
> On acoustic instruments sound can't be separated from > expressiveness as both elements are directly connected. Sound is > result of real time control (= expressiveness).
> Problem of the most electronic instruments is they did this > separation (because of technological of commercial limits), and lost > expressiveness. If we accept it, we can live with it, we can work > with it, made advantage and merit from it and call it for example > "electronic sound" :-) There's nothing bad on it. Aesthetics of huge > amount of art and commercial electronic music is based on this. > Intentional coolness, sterility, robotically precise rhythm, rhythmic > quantization, lack of expressivity... Futuristic picture of the > dehumanized world...
> Yes, I can agree that some electronic instruments can emulate well > most of acoustic instruments, if we use this main factor: MIDI > sequence them and then add additionally lot of controllers. Some of > them need only few and can be well emulated in real time (drums, > plucked instruments, industrial sound FX...), some others need more.
> But despite this fact, and despite the fact I use also such emulation > from practical reasons, I will keep my opinion that the main field > for electronic instruments is to produce electronic sounds. In the > history of electronic music instruments it was always so. In my > opinion direction of emulating acoustic instruments by sampling, RAM > or ROM, and sample based synthesizers, has nothing to do with real > electronic synthesis of sound. It's just additional feature of this > technology, which has of course its use, too.
> Physical modelling is quite different story, here is still hidden a > big future potential. Especially in syncoustic sounds - > electronically produced sounds with character and expressivity of > acoustic instruments.
> Almost any sentence is worth a long discussion. I think is is little off topic here. Although it might be interesting (you probably know, for example, the Rockefeller Chapel Court Test of Hammond organ. Funny story... Referring to "the main field for electronic instruments is to produce electronic sounds. In the history of electronic music instruments it was always so.").
>> Is, for example, the electronic calculator only a substitution and >> fake
>> of a mechanical one?
>>
>> > Can't be compared. Resulting numbers can be the same from both types > of calculators (here in Japan, pioneer country of digital technology, > still lot of people use 400 years old abacus called soroban, and > skilled person can count with it faster then another one with > calculator), resulting sound will always differ.
> Soroban was exactly what I was thinking about... Resulting sounds can be (practically) the same and are the same in many pieces of todays music. I think me, you or nobody else can tell the real sound sources in most of contemporary recordings.
>>> - Electronic instruments are perfect in making electronic sounds.
>>> Using them this way is perfect. Let's do it!
>>> >> What is the "electronic sound"?
>>
>> Milan
>> > If you are the person I suppose then it's funny you ask me same > question as my three years old daughter :-) You are specialist in > this field so you know the answer. OK, here you have mine definition:
>
> All sounds which can't be produced directly by acoustic instruments. > But we can get electronic sound from these instruments by electronic > processing.
>
> All sounds which are produced by electronic synthesis (analog, > digital - additive, subtractive, multiplicative...). Typical > electronic sound is sine wave (therefore it was selected as main > sound for art electronic music in 50ies), but also the other > electronically produced simple waveforms can't be made exactly with > acoustic instruments. Good examples of electronic sounds are also > certain inharmonic spectra (different then bell spectra), klangs, > mixtures, some noises or modulated noises, or result of simple or > multiple modulations (FM, AM, ring modulation, PWM, PM...).
> Maybe after another 10, 50 or 100 years it will be possible to get > acoustic sound from these instruments. But in my opinion this is not > so important. I agree with you that sound is no problem. Problem is > expressivity. When we will be able to have under direct (and simple) > control in real time performance all aspects and elements of sound, > then there will be no difference between acoustic and electronic > instruments. This should be target. And in my opinion answer is > physical modeling.
>
> Concerning this question (which is maybe more philosophical then > technological) we can also come to the conclusion, that electronic > sound in fact doesn't exist because all sounds are from physical > point of view changes of air pressure. Including all those which are > produced by speaker - basically speaker is electronically controlled > drum :-) At least this is opinion of my daughter.
>
> Daniel Forro
> I think I am the person you suppose... The point was: The sentence"Electronic instruments are perfect in making electronic sounds." is sort of tautology (Green paint is perfect in painting green).

Once again, any your sentence is worth a long off topic discussion. Maybe we will meet personally sometime. It would be my pleasure to talk with you.
Thanks for your time and the long valuable answer.
Best
Milan Gustar

🔗Carl Lumma <carl@...>

9/19/2009 1:50:38 PM

>> Oh yes, my mistake... But if additive resynthesis is done on the base
>> of previous sound analysis, it can be considered a kind of modeling.
>>
>Can be other type of synthesis based on the previous sound analysis
>considered a kind of modeling too?
>Milan

I don't agree with the first statement. Anything which models the
spectrum directly isn't physical modeling. Physical modeling is
the modeling of a sound *source*, which produces a spectrum.
So all additive and wavetable methods are excluded.

-Carl

🔗Carl Lumma <carl@...>

9/19/2009 2:06:06 PM

Chuckk wrote:

>> Like I said, I would think it prudent to ignore it for now, and
>> rather to build Csound into the package, and pre-roll several basic
>> synth patches, and expose them in the interface as presets.
>
>I think I will do just that, thanks.

Woohoo!

>That's not such a black-and-white question for most unpaid developers.
>I really wrote it in order to use it myself, but even including
>soundfont support only happened because I imagined others wanting to
>use it. How much unpaid work am I willing to do for imagined
>non-electronic music geeks who may or may not use my software no
>matter what I do? More than none, less than infinity. It was my
>intention from the beginning to wrap everything necessary in one
>executable- when the possibility arises- but clicking through the
>Csound installer's pages is pretty mild compared to what some free
>software developers make their users do.

I don't mind clicking through an installer, as long as cygwin isn't
hiding inside. ;)

>I am still waiting on some
>fixes from the Csound developers, and then the single-file executable
>promised in the Rationale README will be top of my list.

Woohoo!

>BTW, speaking of libraries, you mentioned QT before; I avoided QT
>because it can only be used in proprietary software by paying for a
>special license, and I'm not completely decided against writing
>proprietary software someday. Next GUI project I do, though, will use
>WxWidgets, which is far more dependable and cross-platform than what I
>used, TkInter.

What sort of license-foo is that? WxWidgets are the de facto
standard for Python, but they're much less capable than QT.

http://qt.nokia.com/products/licensing

Looks like you could do LGPL for free. Is that not an acceptable
compromise?

-Carl

🔗Daniel Forro <dan.for@...>

9/19/2009 4:49:01 PM

I'm sure we all know well what's physical modeling.

If you read carefully you will see I didn't mention "physical modeling". Here I was talking generally about "modeling" in the sense of shaping, emulating, forming, simulating, mimicking, setting, programming.... the sound on the base of previous analysis. This analysis doesn't mean necessarily analysis from the point of construction of the musical instrument, but just from the point of its sound.

You have used this term in the same sense as me in the beginning of your second sentence before you have jumped to the "physical modeling" :-)

Daniel Forro

On 20 Sep 2009, at 5:50 AM, Carl Lumma wrote:

>
> >> Oh yes, my mistake... But if additive resynthesis is done on the > base
> >> of previous sound analysis, it can be considered a kind of > modeling.
> >>
> >Can be other type of synthesis based on the previous sound analysis
> >considered a kind of modeling too?
> >Milan
>
> I don't agree with the first statement. Anything which models the
> spectrum directly isn't physical modeling. Physical modeling is
> the modeling of a sound *source*, which produces a spectrum.
> So all additive and wavetable methods are excluded.
>
> -Carl
>

🔗Mike Battaglia <battaglia01@...>

9/20/2009 12:25:37 AM

"A single note in Synful Orchestra may be built from three or more
rapidly spliced RPM phrase fragments. Splicing ordinary PCM sampled
sounds in this way would create unacceptable warbles and clicks.
Synful Orchestra uses a patented form of additive synthesis in which
sounds are generated from combinations of pure sine waves and noise
elements. This gives Synful Orchestra the ability to rapidly stretch,
shift, and splice phrase fragments while preserving perfect phrase
continuity."

Sounds like what they call "additive synthesis" is what they've been
calling "SMS" in the literature, or at least how it was taught to me
at school (http://en.wikipedia.org/wiki/Spectral_modeling_synthesis).
Probably they've made some clever proprietary extension to that and
gotten it patented.

Then again, they also say this:

"RPM technology uses a database of recorded phrases of orchestral
instruments. These are not recordings of isolated notes but complete
musical passages that represent all kinds of articulation and
phrasing... Synful Orchestra uses a set of advanced algorithms to
search the RPM Phrase Database in real-time for fragments that can be
spliced together to form this phrase."

So is it even really SMS at all? Although they claim to have a much
more intelligent algorithm than EastWest's Qlegato and VSL's whatever
it's called for intelligent articulation placement, it still really
boils down to using fragments of samples at its core. How can they
claim to be using a sample database and at the same time claim to be
building the sound up from scratch with sines and noise? If they
instead store samples as a bunch of sine waves that make up the
sample, then they're effectively storing the FFT of the sample, which
is really the same thing as storing the sample itself. Maybe they
store each sample as a mixed a time-frequency representation, to make
it easier for them to pitch scale and splice different samples
together, but I'd hardly call that an "additive synthesizer" in the
same way that an organ is or something. IFFT != additive synthesizer.

Either way, if Synful sounds as good as it does from that technique,
I'd say that people should give up on waveguides for a while and just
work on SMS. It has to be way faster computationally than anything
waveguides are doing.

(Then again, there's always Pianoteq...)

-Mike

On Sat, Sep 19, 2009 at 7:49 PM, Daniel Forro <dan.for@...> wrote:
>
>
>
> I'm sure we all know well what's physical modeling.
>
> If you read carefully you will see I didn't mention "physical
> modeling". Here I was talking generally about "modeling" in the sense
> of shaping, emulating, forming, simulating, mimicking, setting,
> programming.... the sound on the base of previous analysis. This
> analysis doesn't mean necessarily analysis from the point of
> construction of the musical instrument, but just from the point of
> its sound.
>
> You have used this term in the same sense as me in the beginning of
> your second sentence before you have jumped to the "physical
> modeling" :-)
>
> Daniel Forro
>
> On 20 Sep 2009, at 5:50 AM, Carl Lumma wrote:
>
> >
> > >> Oh yes, my mistake... But if additive resynthesis is done on the
> > base
> > >> of previous sound analysis, it can be considered a kind of
> > modeling.
> > >>
> > >Can be other type of synthesis based on the previous sound analysis
> > >considered a kind of modeling too?
> > >Milan
> >
> > I don't agree with the first statement. Anything which models the
> > spectrum directly isn't physical modeling. Physical modeling is
> > the modeling of a sound *source*, which produces a spectrum.
> > So all additive and wavetable methods are excluded.
> >
> > -Carl
> >
>
>

🔗Carl Lumma <carl@...>

9/20/2009 1:00:37 AM

Mike wrote:

>"A single note in Synful Orchestra may be built from three or more
>rapidly spliced RPM phrase fragments. Splicing ordinary PCM sampled
>sounds in this way would create unacceptable warbles and clicks.
>Synful Orchestra uses a patented form of additive synthesis in which
>sounds are generated from combinations of pure sine waves and noise
>elements. This gives Synful Orchestra the ability to rapidly stretch,
>shift, and splice phrase fragments while preserving perfect phrase
>continuity."
>
>Sounds like what they call "additive synthesis" is what they've been
>calling "SMS" in the literature, or at least how it was taught to me
>at school (http://en.wikipedia.org/wiki/Spectral_modeling_synthesis).
>Probably they've made some clever proprietary extension to that and
>gotten it patented.

They = he. You can read his patent here:
http://www.google.com/patents/about?id=nkoIAAAAEBAJ

SMS can be considered a type of additive synthesis, and is also
patented.

>Then again, they also say this:
>
>"RPM technology uses a database of recorded phrases of orchestral
>instruments. These are not recordings of isolated notes but complete
>musical passages that represent all kinds of articulation and
>phrasing... Synful Orchestra uses a set of advanced algorithms to
>search the RPM Phrase Database in real-time for fragments that can be
>spliced together to form this phrase."
>
>So is it even really SMS at all? Although they claim to have a much
>more intelligent algorithm than EastWest's Qlegato and VSL's whatever
>it's called for intelligent articulation placement, it still really
>boils down to using fragments of samples at its core.

There's no waveform data in the instrument. It's entirely
in the parameter space of the synth. The phrases are morphed
in this space, which is the point. Take my word for it, there
are no remotely comparable products on the market, real or
experimental. It's closer to how some concatenative speech
synthesis algorithms work (like the one in Mac OS 10.5), but
even those use wavetable data.

>they're effectively storing the FFT of the sample, which
>is really the same thing as storing the sample itself.

The database is something like two orders of magnitude smaller
than it would be in wavetable form, so I hardly think it's
the same.

-Carl

🔗Mike Battaglia <battaglia01@...>

9/20/2009 1:56:04 AM

> SMS can be considered a type of additive synthesis, and is also
> patented.

I didn't know that. In retrospect, of course it is. Also, of course
it's the guys at Stanford that own the patent. Next time I'll just
assume Julius Smith has already patented whatever cool modern thing
I'm referencing.

> They = he. You can read his patent here:
> http://www.google.com/patents/about?id=nkoIAAAAEBAJ
//
> There's no waveform data in the instrument. It's entirely
> in the parameter space of the synth. The phrases are morphed
> in this space, which is the point. Take my word for it, there
> are no remotely comparable products on the market, real or
> experimental. It's closer to how some concatenative speech
> synthesis algorithms work (like the one in Mac OS 10.5), but
> even those use wavetable data.
//
> The database is something like two orders of magnitude smaller
> than it would be in wavetable form, so I hardly think it's
> the same.

I read through it a bit. It's a very cool idea. As you said, the core
of it is the way he stores the samples, which he stores in a "spectral
vector" format that he also has patented, as well as the method of
turning putting PCM data into this spectral vector format, and his
also patented method of pitch shifting and time scaling the data...
There were at least two more patents I'd have to read through before I
really figured out the root of what he was doing DSP-wise, and at 4 AM
I'm giving up. Nonetheless, the concept of figuring out what
time-varying frequencies are in a signal and storing it as a spectral
vector is very neat indeed.

-Mike

🔗Carl Lumma <carl@...>

9/20/2009 2:15:23 AM

At 01:56 AM 9/20/2009, you wrote:
>> SMS can be considered a type of additive synthesis, and is also
>> patented.
>
>I didn't know that. In retrospect, of course it is. Also, of course
>it's the guys at Stanford that own the patent. Next time I'll just
>assume Julius Smith has already patented whatever cool modern thing
>I'm referencing.

Probably one of Smith's grad students. He's listed as coinventor
on one but not on the other. But, holy heck, Erling Wold is
coinventor on the other!!

Now there's a blast from the past. Old JI Network guy. Wrote
operas and stuff in JI, some neat stuff. I think everything I
have of his is on cassette.

-Carl

🔗Aaron Johnson <aaron@...>

9/20/2009 6:19:44 AM

The mention of "pure sine waves and noise elements" makes me think of the
Loris opcodes in Csound, which were added after the research done by the
CERL Loris group on using bandwidth enhanced noise to do additive
resynthesis. The website is here:

http://www.cerlsoundgroup.org/Loris/

Unfortunately (for them, mostly) they don't post audio examples, so no one
can here how successful and convincing the result is. But the idea is that
naturalism in additive resynthesis is greatly enhanced if one uses not pure
sines, but 'dirty sines' that are modelled by narrow-bandpass noise...in my
understanding, each harmonic then is like a narrow-bandpassed white noise
tone.

It seems that this way of thinking, and/or physical modelling, has very much
been the zeitgeist, and I wouldn't be surprised to find that Synful is just
a slightly extended and conveniently packaged version of these ideas.

AKJ

On Sun, Sep 20, 2009 at 2:25 AM, Mike Battaglia <battaglia01@...>wrote:

> "A single note in Synful Orchestra may be built from three or more
> rapidly spliced RPM phrase fragments. Splicing ordinary PCM sampled
> sounds in this way would create unacceptable warbles and clicks.
> Synful Orchestra uses a patented form of additive synthesis in which
> sounds are generated from combinations of pure sine waves and noise
> elements. This gives Synful Orchestra the ability to rapidly stretch,
> shift, and splice phrase fragments while preserving perfect phrase
> continuity."
>
> Sounds like what they call "additive synthesis" is what they've been
> calling "SMS" in the literature, or at least how it was taught to me
> at school (http://en.wikipedia.org/wiki/Spectral_modeling_synthesis).
> Probably they've made some clever proprietary extension to that and
> gotten it patented.
>
> Then again, they also say this:
>
> "RPM technology uses a database of recorded phrases of orchestral
> instruments. These are not recordings of isolated notes but complete
> musical passages that represent all kinds of articulation and
> phrasing... Synful Orchestra uses a set of advanced algorithms to
> search the RPM Phrase Database in real-time for fragments that can be
> spliced together to form this phrase."
>
> So is it even really SMS at all? Although they claim to have a much
> more intelligent algorithm than EastWest's Qlegato and VSL's whatever
> it's called for intelligent articulation placement, it still really
> boils down to using fragments of samples at its core. How can they
> claim to be using a sample database and at the same time claim to be
> building the sound up from scratch with sines and noise? If they
> instead store samples as a bunch of sine waves that make up the
> sample, then they're effectively storing the FFT of the sample, which
> is really the same thing as storing the sample itself. Maybe they
> store each sample as a mixed a time-frequency representation, to make
> it easier for them to pitch scale and splice different samples
> together, but I'd hardly call that an "additive synthesizer" in the
> same way that an organ is or something. IFFT != additive synthesizer.
>
> Either way, if Synful sounds as good as it does from that technique,
> I'd say that people should give up on waveguides for a while and just
> work on SMS. It has to be way faster computationally than anything
> waveguides are doing.
>
> (Then again, there's always Pianoteq...)
>
> -Mike
>
>
> On Sat, Sep 19, 2009 at 7:49 PM, Daniel Forro <dan.for@...> wrote:
> >
> >
> >
> > I'm sure we all know well what's physical modeling.
> >
> > If you read carefully you will see I didn't mention "physical
> > modeling". Here I was talking generally about "modeling" in the sense
> > of shaping, emulating, forming, simulating, mimicking, setting,
> > programming.... the sound on the base of previous analysis. This
> > analysis doesn't mean necessarily analysis from the point of
> > construction of the musical instrument, but just from the point of
> > its sound.
> >
> > You have used this term in the same sense as me in the beginning of
> > your second sentence before you have jumped to the "physical
> > modeling" :-)
> >
> > Daniel Forro
> >
> > On 20 Sep 2009, at 5:50 AM, Carl Lumma wrote:
> >
> > >
> > > >> Oh yes, my mistake... But if additive resynthesis is done on the
> > > base
> > > >> of previous sound analysis, it can be considered a kind of
> > > modeling.
> > > >>
> > > >Can be other type of synthesis based on the previous sound analysis
> > > >considered a kind of modeling too?
> > > >Milan
> > >
> > > I don't agree with the first statement. Anything which models the
> > > spectrum directly isn't physical modeling. Physical modeling is
> > > the modeling of a sound *source*, which produces a spectrum.
> > > So all additive and wavetable methods are excluded.
> > >
> > > -Carl
> > >
> >
> >
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

[Non-text portions of this message have been removed]

🔗Aaron Johnson <aaron@...>

9/20/2009 6:26:41 AM

By the way, I've done hand-rolled Csound instruments this way. The results
are expressive and 'natural' as long as other elements of naturalism are in
place, like envelopes being exponential, etc. But they are a pain in the ass
to design (the best way is with a text script in Python--there are so many
repeated elements for each harmonic), and since one has to use individual
filters for each harmonic, the results can be computationally expensive. The
computation load decreases greatly if you only use one noise source for all
harmonics. I imagine that the Loris code performs better, being optimised
for the purpose, but I can't confirm this.

I've been meaning to check out the Loris opcodes and hear the results, but
I've been otherwise occupied. It's on my far-off 'to do' list.

AKJ

On Sun, Sep 20, 2009 at 8:19 AM, Aaron Johnson <aaron@...> wrote:

> The mention of "pure sine waves and noise elements" makes me think of the
> Loris opcodes in Csound, which were added after the research done by the
> CERL Loris group on using bandwidth enhanced noise to do additive
> resynthesis. The website is here:
>
> http://www.cerlsoundgroup.org/Loris/
>
> Unfortunately (for them, mostly) they don't post audio examples, so no one
> can here how successful and convincing the result is. But the idea is that
> naturalism in additive resynthesis is greatly enhanced if one uses not pure
> sines, but 'dirty sines' that are modelled by narrow-bandpass noise...in my
> understanding, each harmonic then is like a narrow-bandpassed white noise
> tone.
>
> It seems that this way of thinking, and/or physical modelling, has very
> much been the zeitgeist, and I wouldn't be surprised to find that Synful is
> just a slightly extended and conveniently packaged version of these ideas.
>
> AKJ
>
>
>
> On Sun, Sep 20, 2009 at 2:25 AM, Mike Battaglia <battaglia01@...>wrote:
>
>> "A single note in Synful Orchestra may be built from three or more
>> rapidly spliced RPM phrase fragments. Splicing ordinary PCM sampled
>> sounds in this way would create unacceptable warbles and clicks.
>> Synful Orchestra uses a patented form of additive synthesis in which
>> sounds are generated from combinations of pure sine waves and noise
>> elements. This gives Synful Orchestra the ability to rapidly stretch,
>> shift, and splice phrase fragments while preserving perfect phrase
>> continuity."
>>
>> Sounds like what they call "additive synthesis" is what they've been
>> calling "SMS" in the literature, or at least how it was taught to me
>> at school (http://en.wikipedia.org/wiki/Spectral_modeling_synthesis).
>> Probably they've made some clever proprietary extension to that and
>> gotten it patented.
>>
>> Then again, they also say this:
>>
>> "RPM technology uses a database of recorded phrases of orchestral
>> instruments. These are not recordings of isolated notes but complete
>> musical passages that represent all kinds of articulation and
>> phrasing... Synful Orchestra uses a set of advanced algorithms to
>> search the RPM Phrase Database in real-time for fragments that can be
>> spliced together to form this phrase."
>>
>> So is it even really SMS at all? Although they claim to have a much
>> more intelligent algorithm than EastWest's Qlegato and VSL's whatever
>> it's called for intelligent articulation placement, it still really
>> boils down to using fragments of samples at its core. How can they
>> claim to be using a sample database and at the same time claim to be
>> building the sound up from scratch with sines and noise? If they
>> instead store samples as a bunch of sine waves that make up the
>> sample, then they're effectively storing the FFT of the sample, which
>> is really the same thing as storing the sample itself. Maybe they
>> store each sample as a mixed a time-frequency representation, to make
>> it easier for them to pitch scale and splice different samples
>> together, but I'd hardly call that an "additive synthesizer" in the
>> same way that an organ is or something. IFFT != additive synthesizer.
>>
>> Either way, if Synful sounds as good as it does from that technique,
>> I'd say that people should give up on waveguides for a while and just
>> work on SMS. It has to be way faster computationally than anything
>> waveguides are doing.
>>
>> (Then again, there's always Pianoteq...)
>>
>> -Mike
>>
>>
>> On Sat, Sep 19, 2009 at 7:49 PM, Daniel Forro <dan.for@...> wrote:
>> >
>> >
>> >
>> > I'm sure we all know well what's physical modeling.
>> >
>> > If you read carefully you will see I didn't mention "physical
>> > modeling". Here I was talking generally about "modeling" in the sense
>> > of shaping, emulating, forming, simulating, mimicking, setting,
>> > programming.... the sound on the base of previous analysis. This
>> > analysis doesn't mean necessarily analysis from the point of
>> > construction of the musical instrument, but just from the point of
>> > its sound.
>> >
>> > You have used this term in the same sense as me in the beginning of
>> > your second sentence before you have jumped to the "physical
>> > modeling" :-)
>> >
>> > Daniel Forro
>> >
>> > On 20 Sep 2009, at 5:50 AM, Carl Lumma wrote:
>> >
>> > >
>> > > >> Oh yes, my mistake... But if additive resynthesis is done on the
>> > > base
>> > > >> of previous sound analysis, it can be considered a kind of
>> > > modeling.
>> > > >>
>> > > >Can be other type of synthesis based on the previous sound analysis
>> > > >considered a kind of modeling too?
>> > > >Milan
>> > >
>> > > I don't agree with the first statement. Anything which models the
>> > > spectrum directly isn't physical modeling. Physical modeling is
>> > > the modeling of a sound *source*, which produces a spectrum.
>> > > So all additive and wavetable methods are excluded.
>> > >
>> > > -Carl
>> > >
>> >
>> >
>>
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
>>
>>
>>
>>
>
>
> --
>
> Aaron Krister Johnson
> http://www.akjmusic.com
> http://www.untwelve.org
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

[Non-text portions of this message have been removed]

🔗touchedchuckk <BadMuthaHubbard@...>

9/20/2009 7:02:50 AM

--- In MakeMicroMusic@yahoogroups.com, Rick McGowan <rick@...> wrote:
>
> --- In MakeMicroMusic@yahoogroups.com, Daniel Forro <dan.for@> wrote:
> > > Anyway all this emulation of acoustic instruments by electronic means
> > > is ridiculous. Electronic instruments should produce innovative
> > > electronic sounds.
> >
>
> Hmmm. Right.
>
> For me the main reason for using electronic means of orchestral
> production is to hear a mock-up of what I'm writing in some semblance of
> what it might sound like if played by a real orchestra. On the planet
> where I live, mere mortals can't just have a full orchestra at their
> beck and call to perfectly perform their work at any time of day or
> night. Especially while it's still work-in-progress.
>
> "Innovative electronic sounds" have their place, of course, but I think
> there's room in the world for more than one opinion.
>
> Rick
>

Beyond hearing what it would sound like, that is, for finished products, if you had the means to produce any given piece in any way you chose, might there be circumstances in which you would choose a sophisticated software emulation of an orchestra over a lot of world-class performers with acoustic instruments, and perhaps a world-class director? Assuming the sky is the limit with either.

-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/20/2009 7:15:56 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Chuckk wrote:

> >BTW, speaking of libraries, you mentioned QT before; I avoided QT
> >because it can only be used in proprietary software by paying for a
> >special license, and I'm not completely decided against writing
> >proprietary software someday. Next GUI project I do, though, will use
> >WxWidgets, which is far more dependable and cross-platform than what I
> >used, TkInter.
>
> What sort of license-foo is that? WxWidgets are the de facto
> standard for Python, but they're much less capable than QT.
>
> http://qt.nokia.com/products/licensing
>
> Looks like you could do LGPL for free. Is that not an acceptable
> compromise?

It's just that I might later want to use the same knowledge for evil (proprietary), and with QT that means either investing money or learning yet another toolkit. I may or may not go the proprietary route some day, but I don't want the fact that I've invested time in learning one GUI toolkit instead of another to be part of my reason for going proprietary or not.
TkInter actually ships as part of Python, which is one step simpler than WxWidgets, but it bites on Mac. It's also not so smoothly integrated into C++ as WxWidgets, and if I can muster the courage I may create something in C++ as well.

-Chuckk

🔗Carl Lumma <carl@...>

9/20/2009 10:02:42 AM

At 06:19 AM 9/20/2009, you wrote:
>The mention of "pure sine waves and noise elements" makes me think of the
>Loris opcodes in Csound, which were added after the research done by the
>CERL Loris group on using bandwidth enhanced noise to do additive
>resynthesis. The website is here:
>
>http://www.cerlsoundgroup.org/Loris/
>
>Unfortunately (for them, mostly) they don't post audio examples, so no one
>can here how successful and convincing the result is.

You can download Loris. Also, SPEAR incorporates many of the
methods used (most importantly, the spectral reassignment FFT method).
SPEAR is a free, polished end-user application.

-Carl

🔗Carl Lumma <carl@...>

9/20/2009 10:04:56 AM

Chuckk wrote:

>> What sort of license-foo is that? WxWidgets are the de facto
>> standard for Python, but they're much less capable than QT.
>>
>> http://qt.nokia.com/products/licensing
>>
>> Looks like you could do LGPL for free. Is that not an acceptable
>> compromise?
>
>It's just that I might later want to use the same knowledge for evil
>(proprietary), and with QT that means either investing money or
>learning yet another toolkit.

Can't you sell your warez with Qt in LGPL mode? That's my
understanding. This option was only made available in Jan of
this year.

-Carl

🔗touchedchuckk <BadMuthaHubbard@...>

9/21/2009 3:06:31 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:
>
> Chuckk wrote:
>
> >> What sort of license-foo is that? WxWidgets are the de facto
> >> standard for Python, but they're much less capable than QT.
> >>
> >> http://qt.nokia.com/products/licensing
> >>
> >> Looks like you could do LGPL for free. Is that not an acceptable
> >> compromise?
> >
> >It's just that I might later want to use the same knowledge for evil
> >(proprietary), and with QT that means either investing money or
> >learning yet another toolkit.
>
> Can't you sell your warez with Qt in LGPL mode? That's my
> understanding. This option was only made available in Jan of
> this year.

Sorry, took a while for that to sink in. A major change I hadn't heard about, that doubles the length of my list of potential toolkits!
-Chuckk

🔗touchedchuckk <BadMuthaHubbard@...>

9/21/2009 5:23:44 AM

--- In MakeMicroMusic@yahoogroups.com, Aaron Johnson <aaron@...> wrote:

> Csound already supports soundfonts, so I don't see why this is an
> issue. The older depricated ones in particular are very microtonally
> flexible--the newer ones (fluid opcodes) are not, however.

This works for me, substituting the path to your sf2 file. It's not very clearly explained in the manual, but fluidControl takes LSB first and MSB second as arguments for pitch bend messages:

<CsoundSynthesizer>
<CsOptions>

</CsOptions>
<CsInstruments>
sr=44100
ksmps=16
nchnls=2

gifl1 fluidEngine
gisfnum fluidLoad "/usr/share/sounds/sf2/FluidR3_GM.sf2", gifl1

instr 11
idur = p3
inote = p4
ivel = p5
ibend = p6
ibend1 = ibend & 127
ibend2 = ibend >> 7
print ibend
print ibend1
print ibend2
fluidProgramSelect gifl1, 1, gisfnum, 0, 0
fluidControl gifl1, 224, 1, ibend1, ibend2
fluidNote gifl1, 1, inote, ivel

al, ar fluidOut gifl1
outs al* 0dbfs, ar*0dbfs

endin

</CsInstruments>
<CsScore>
i11 0 .5 60 100 0
i11 + . 60 100 1024
i11 + . 60 100 2048
i11 + . 60 100 3072
i11 + . 61 100 0
e
</CsScore>
</CsoundSynthesizer>

🔗touchedchuckk <BadMuthaHubbard@...>

9/21/2009 5:23:58 AM

--- In MakeMicroMusic@yahoogroups.com, Graham Breed <gbreed@...> wrote:
>
> touchedchuckk wrote:
>
> > I haven't forgotten what you said about the soundfonts.
> > Csound has Fluidsynth opcodes I could try; do you have a
> > soundfont that shows the difference well and will fit in
> > an email to me?
>
>
> The Fluid opcodes are not microtonal friendly. But somebody
> said recently that the stand-alone FluidSynth does have
> tuning tables. So you could get some microtonality by
> exposing them to Csound -- but still not the ideal solution
> for JI.

I can send pitch-bend messages to FluidSynth engines in Csound using fluidControl. See my reply to Aaron in this thread for an example. Is that what you meant?

-Chuckk

>
> There are some other SoundFont opcodes but they have their
> own problems.
>
>
> Graham
>
>
> p.s. I'm interested in what you're doing but not actively
> following it now.
>

🔗Aaron Johnson <aaron@...>

9/21/2009 9:15:50 AM

I'm not sure what Graham means by "exposing them to Csound", but what I
meant was that the newer Fluid opcodes are not as micro-friendly as the
older 'sfplay' etc. opcodes....the only way I know of is using pitch bend,
which IMO, removes the point of using Csound's flexibility.

AKJ

On Mon, Sep 21, 2009 at 7:23 AM, touchedchuckk
<BadMuthaHubbard@...>wrote:

> --- In MakeMicroMusic@yahoogroups.com, Graham Breed <gbreed@...> wrote:
> >
> > touchedchuckk wrote:
> >
> > > I haven't forgotten what you said about the soundfonts.
> > > Csound has Fluidsynth opcodes I could try; do you have a
> > > soundfont that shows the difference well and will fit in
> > > an email to me?
> >
> >
> > The Fluid opcodes are not microtonal friendly. But somebody
> > said recently that the stand-alone FluidSynth does have
> > tuning tables. So you could get some microtonality by
> > exposing them to Csound -- but still not the ideal solution
> > for JI.
>
> I can send pitch-bend messages to FluidSynth engines in Csound using
> fluidControl. See my reply to Aaron in this thread for an example. Is that
> what you meant?
>
> -Chuckk
>
>
>
> >
> > There are some other SoundFont opcodes but they have their
> > own problems.
> >
> >
> > Graham
> >
> >
> > p.s. I'm interested in what you're doing but not actively
> > following it now.
> >
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>

--

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

[Non-text portions of this message have been removed]

🔗touchedchuckk <BadMuthaHubbard@...>

9/24/2009 9:05:33 AM

--- In MakeMicroMusic@yahoogroups.com, Aaron Johnson <aaron@...> wrote:
>
> I'm not sure what Graham means by "exposing them to Csound", but what I
> meant was that the newer Fluid opcodes are not as micro-friendly as the
> older 'sfplay' etc. opcodes....the only way I know of is using pitch bend,
> which IMO, removes the point of using Csound's flexibility.

Well I'm using Csound because of its flexibility, but it's far simpler to do everything- OSC, Soundfont, and Csound orc- directly through Csound than to import other modules and then try to synchronize the notes exactly between Csound and some Soundfont library. I've had a complaint that the way the soundfont support works is inadequate, so I'm wondering if the Fluid opcodes will be more complete. Any idea?

-Chuckk

🔗Aaron Johnson <aaron@...>

9/24/2009 12:50:01 PM

On Thu, Sep 24, 2009 at 11:05 AM, touchedchuckk <BadMuthaHubbard@...
> wrote:

I've had a complaint that the way the soundfont support works [sic: in
> Csound] is inadequate, so I'm wondering if the Fluid opcodes will be more
> complete. Any idea?
>

I'm curious the nature of the complaint? What feature was/is missing?

In any event, I'm sure we can agree that having to use pitchbend to control
the FluidSynth opcodes is less than ideal....especially given the decreased
resolution and control.

If you must use them, perhaps it's possible that they don't have the channel
limitation issues of pure MIDI? ie, maybe one can have a front-end
instrument that can be accessed in score-file CPS format, sending 'event'
signals using the 'event' or 'event_i' opcodes, which would send a MIDI note
number and pitch-bend to the 'under-the-hood' instrument? Worth a try, I
suppose.

It's lamentable that the developers of Csound lapsed on the generality that
is Csound when they ported the Fluid libraries....at least, pitch-wise.

Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org

[Non-text portions of this message have been removed]

🔗cameron <misterbobro@...>

9/29/2009 4:19:47 AM

--- In MakeMicroMusic@yahoogroups.com, Carl Lumma <carl@...> wrote:

>
> In summary, though I may even perversely prefer a homebrew
> soundfont to GPO, I might prefer even more a simple orchestra of
> triangle waves. That's how much superficial realism I'm willing
> to give up to get underlying realism (musical expression).
> I realize not everyone is willing to go that far, and I promise
> not to criticize them for the next 6 months, starting... now.
>
> -Carl
>

I agree with this general take on synthesis and sampling.

🔗touchedchuckk <BadMuthaHubbard@...>

10/7/2009 9:45:14 PM

--- In MakeMicroMusic@yahoogroups.com, Aaron Johnson <aaron@...> wrote:
>
> On Thu, Sep 24, 2009 at 11:05 AM, touchedchuckk <BadMuthaHubbard@...
> > wrote:
>
> I've had a complaint that the way the soundfont support works [sic: in
> > Csound] is inadequate, so I'm wondering if the Fluid opcodes will be more
> > complete. Any idea?
> >
>
> I'm curious the nature of the complaint? What feature was/is missing?

It was in this thread, from Cody. He says that Soundfonts with multiple samples for different velocities only appear to use one. I don't have a good Soundfont to test with, so it could well be something in Rationale.

-Chuckk

🔗Chris Vaisvil <chrisvaisvil@...>

10/8/2009 4:46:13 AM

I have gigabytes of soundfonts you can download

http://clones.soonlabel.com/public/sfbank

Have at it!

On Thu, Oct 8, 2009 at 12:45 AM, touchedchuckk
<BadMuthaHubbard@...>wrote:

>
>
>
>
> --- In MakeMicroMusic@yahoogroups.com <MakeMicroMusic%40yahoogroups.com>,
> Aaron Johnson <aaron@...> wrote:
> >
> > On Thu, Sep 24, 2009 at 11:05 AM, touchedchuckk <BadMuthaHubbard@...
> > > wrote:
> >
> > I've had a complaint that the way the soundfont support works [sic: in
> > > Csound] is inadequate, so I'm wondering if the Fluid opcodes will be
> more
> > > complete. Any idea?
> > >
> >
> > I'm curious the nature of the complaint? What feature was/is missing?
>
> It was in this thread, from Cody. He says that Soundfonts with multiple
> samples for different velocities only appear to use one. I don't have a good
> Soundfont to test with, so it could well be something in Rationale.
>
> -Chuckk
>
>
>

[Non-text portions of this message have been removed]

🔗cameron <misterbobro@...>

10/8/2009 4:53:29 AM

Yikes, hahahaha! Well thanks. I like these homemade things such as your barge bits, but generally I don't use sample sets other than Kirk Hunter orchestra and sometimes percussion samples for manipulation. Or animal samples, whales, gibbons and alligators.

--- In MakeMicroMusic@yahoogroups.com, Chris Vaisvil <chrisvaisvil@...> wrote:
>
> I have gigabytes of soundfonts you can download
>
> http://clones.soonlabel.com/public/sfbank
>
> Have at it!
>
> On Thu, Oct 8, 2009 at 12:45 AM, touchedchuckk
> <BadMuthaHubbard@...>wrote:
>
> >
> >
> >
> >
> > --- In MakeMicroMusic@yahoogroups.com <MakeMicroMusic%40yahoogroups.com>,
> > Aaron Johnson <aaron@> wrote:
> > >
> > > On Thu, Sep 24, 2009 at 11:05 AM, touchedchuckk <BadMuthaHubbard@
> > > > wrote:
> > >
> > > I've had a complaint that the way the soundfont support works [sic: in
> > > > Csound] is inadequate, so I'm wondering if the Fluid opcodes will be
> > more
> > > > complete. Any idea?
> > > >
> > >
> > > I'm curious the nature of the complaint? What feature was/is missing?
> >
> > It was in this thread, from Cody. He says that Soundfonts with multiple
> > samples for different velocities only appear to use one. I don't have a good
> > Soundfont to test with, so it could well be something in Rationale.
> >
> > -Chuckk
> >
> >
> >
>
>
> [Non-text portions of this message have been removed]
>