
phidgetsusa.com

🔗harold_fortuin <harold@...>

4/25/2005 10:10:41 PM

A rarity in my bulk e-mail folder: an actually interesting message,
this one promoting
phidgetsusa.com

Using very simple code in Visual Basic or other languages running on
Windows, their software/hardware lets you control motors and levers,
read sensor data, etc.
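The programming model is basically "read an analog input, drive an output." A toy sketch of that idea in plain Python (this is not the Phidgets API itself, just the kind of mapping their kits let you do with a few lines):

```python
def sensor_to_servo_angle(raw, raw_max=1000, angle_min=0.0, angle_max=180.0):
    """Linearly map a raw analog reading in [0, raw_max] to a servo angle."""
    raw = max(0, min(raw, raw_max))  # clamp noisy/out-of-range readings
    return angle_min + (angle_max - angle_min) * raw / raw_max

# e.g. a light sensor at half scale would drive the servo to 90 degrees
half = sensor_to_servo_angle(500)  # 90.0
```

The real kits wrap the USB I/O for you; all that's left is glue code like this.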

One example project uses servos to play a small xylophone.

I have not used these, but the technology looks promising for all of
us who failed to complete years of electronics courses, but who
nonetheless might wish to get input/output from the real world.

Should be useful for anyone building new instruments, or wanting
computer control over mechanical instruments. And the products are cheap.

🔗Catharsis <catharsis@...>

4/26/2005 11:15:06 AM

At 10:36 AM 4/26/2005, you wrote:
>I have not used these, but the technology looks promising for all of
>us who failed to complete years of electronics courses, but who
>nonetheless might wish to get input/output from the real world.
>
>Should be useful for anyone building new instruments, or wanting
>computer control over mechanical instruments. And the products are cheap.

Yep... I built a prototype mixer for spatial surround panning with Phidgets:
http://research.egrsoftware.com/hardware/protomix1/

I am eventually going to do a version with a machined case, powder coating, better joysticks, etc.

My next big goal with hardware is to actually break into the FPGA world and create a cheap, full-featured OSC-to-hardware interface. Err.. this requires an extra day and a half a week for me to pull off right now though..

Phidgets are fun and easy to prototype with though..

Best,
--Mike

Founder & Lead Developer; EGR Software
http://www.egrsoftware.com

🔗Carl Lumma <ekin@...>

4/26/2005 2:00:50 PM

>Best,
>--Mike
>
>Founder & Lead Developer; EGR Software
>http://www.egrsoftware.com

Heya Mike,

EGR software sounds interesting. But you might want to change
"is" to "are" on your mainpage...

"Technologies including the Typhon framework, Scream, and
Auriga3D is enabling"

-Carl

🔗Catharsis <catharsis@...>

4/27/2005 1:22:08 PM

At 09:32 AM 4/27/2005, you wrote:
>EGR software sounds interesting. But you might want to change...

Thanks for the heads up... I've been meaning to do a site overhaul soon and get more appropriate front-page info/graphics up.. :) It's on the list.. somewhere.. An extra set of eyes really helps, so if you see anything else please do holler!

I've had to take a day job to obtain funding and continue moonlighting at night, so it's been hectic and slower going to some extent. Sun pulled a nasty on me and denied my JavaOne presentation this year even though it was way cutting edge. I'm fairly certain there won't be others there presenting anything close to what I'm doing (it's mostly enterprise business crap at JavaOne/Sun!). This hurts a small company like me _big time_ in regard to obtaining contracts; hence the day job now. I've been slowly negotiating contracts from my public speaking outings regarding EGR tech; things are still very much in the becoming stage.

A brief snapshot of some audio software I plan to complete this year: my 2nd-gen spatial software that is fully 3D-enabled for large sound arrays. Beyond typical spatial trajectories, I have a firm concept for volumetric spatialization, i.e. defining static/dynamic 3D volumes that constrain granular synthesis techniques and/or sound events (throw in some physics or generative movement as well). I'd very much like to hear what a morphing microtonal cloud of sound events would sound like on a 3D array.. Then add a few more layers and, err, rotate them around.. One can imagine at this point. :) All of this clearly visualized using the latest 3D graphics tech available.
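[The "volume-constrained microtonal cloud" idea can be sketched in a few lines of Python. Everything here — the function names, the 19-EDO scale choice, the sphere as the constraining volume — is an illustrative assumption, not EGR's actual implementation:]

```python
import math
import random

def edo_freq(base_hz, steps, edo=19):
    """Pitch quantized to an equal division of the octave (19-EDO here)."""
    return base_hz * 2 ** (steps / edo)

def grain_cloud(n, radius=1.0, base_hz=220.0, edo=19, seed=1):
    """Sample grain events uniformly inside a sphere: each event gets a
    3D position (the constraining volume), a microtonal pitch, and a
    short granular duration."""
    rng = random.Random(seed)
    grains = []
    for _ in range(n):
        # rejection-sample a point inside the sphere
        while True:
            x, y, z = (rng.uniform(-radius, radius) for _ in range(3))
            if x * x + y * y + z * z <= radius * radius:
                break
        step = rng.randrange(edo)  # random scale degree within one octave
        grains.append({"pos": (x, y, z),
                       "freq": edo_freq(base_hz, step, edo),
                       "dur": rng.uniform(0.01, 0.1)})
    return grains
```

A renderer would then spatialize each grain at its position; morphing the volume or its parameters over time gives the "dynamic" case.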

I'm still allied with Asphodel in SF, and they just built a new facility in downtown SF with a 16.8.1 sound system. The extra .1 refers to a bunch of subs underneath the floor.. :) I'm tentatively planning on getting my software in there toward the end of the year into next, running live-only performances and bringing in folks ranging from edgy dance music to the esoteric extremes for full spatial madness.. :)

Anyway.. The Phidgets kits are fun and easy to use — a very good intro to working with / interfacing to hardware. They provided a good 1st step for some of the ideas I have for creating an A/D OSC network interface.
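[One reason OSC is attractive for a hardware interface like the one described above is how simple the wire format is: a NUL-padded address string, a type-tag string, then big-endian arguments. A minimal encoder sketch, my own illustration rather than anything from EGR:]

```python
import struct

def osc_pad(b):
    """Pad bytes to a 4-byte boundary with NULs, as the OSC spec requires."""
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message whose arguments are all float32 (big-endian)."""
    addr = osc_pad(address.encode("ascii") + b"\x00")
    tags = osc_pad(("," + "f" * len(args)).encode("ascii") + b"\x00")
    data = b"".join(struct.pack(">f", a) for a in args)
    return addr + tags + data

# a hypothetical A/D channel reading, ready to send over UDP
packet = osc_message("/adc/0", 0.5)
```

Small enough that even an FPGA/microcontroller on the far end could parse it.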

Best,
--Mike

Founder & Lead Developer; EGR Software
http://www.egrsoftware.com

🔗Carl Lumma <ekin@...>

4/27/2005 2:49:31 PM

>I've had to take a day job to obtain funding and continue moonlighting
>at night, so its been hectic and slower going to some extent. Sun
>pulled a nasty on me and denied my JavaOne presentation this year even
>though it was way cutting edge. I'm fairly certain there won't be others
>there presenting anything close to what I'm doing (its mostly enterprise
>business crap at JavaOne/Sun!).

Teh sux. Sun just can't stop shooting itself in the foot, it seems.

>A brief snapshot of some audio software I plan to complete this year
>is my 2nd gen spatial software that is fully 3D enabled for large sound
>arrays.

I'd love to hear more about this. Write me off-list at carl at
lumma dot org. Or keep it on-list if you can tie it on-topic for MMM.
Or take it over to Tuning.

>Beyond typical spatial trajectories I have a firm concept of being
>able to do volumetric spatialization. IE. defining static/dynamic 3D
>volumes that constrain granular synthesis techniques and or sound
>events (throw in some physics or generative movement as well). I'd
>very much like to hear what a morphing microtonal cloud of sound events
>would sound like on a 3D array.. The add a few more layers and err
>rotate them around.. One can imagine at this point. :) All of this
>clearly visualized through using the latest 3D graphics tech available.

I'd definitely like to hear more about this, if you have a whitepaper
or something...

>I'm still allied with Asphodel in SF and they just built a new facility
>with a 16.8.1 sound system that is located in downtown SF.

I'm writing this from 1 Market, 39th floor. Wanna 'do lunch' sometime?
:)

-Carl

🔗Catharsis <catharsis@...>

4/30/2005 3:43:04 PM

At 09:44 AM 4/28/2005, you wrote:
Re: JavaOne
>Teh sux. Sun just can't stop shooting itself in the foot, it seems.

Yes.. limited vision.. If it isn't enterprise, the presenter doesn't have a big presence, or you aren't inside at Sun, getting a full-floor technical session seems impossible. I was very disappointed with the papers selected this year.. absolutely nothing cutting edge from sources outside of Sun. The only cool BOF (night session) is by a couple of folks who are using a cell phone to control drum machine software.

To be fair, I sent in a proposal for a 500-person full technical presentation and not a small 50-person Birds of a Feather night session. I have access to a large mobile 6-to-8-channel sound system and really wanted to unleash a good deal of my work on a decent audience rather than a handful. It would have been criminal to deny my proposal even as a BOF. I figured with my speaking record and what I'm doing it would be a "shoo-in".. I did a BOF last year that went really well.

>I'd love to hear more about this.
>I'd definitely like to hear more about this, if you have a whitepaper
>or something...

The most interesting aspect of my work is that it is highly informed by the computer graphics discipline — from CG concepts that are appropriate for new audio techniques to just the math involved.

Ever since I used a Kyma system back in 2000, I've recognized that the most limiting aspect of real-time synthesis environments is the lack of a detailed and expressive interface between the user and the system. SuperCollider3 in particular: SC3 is so capable of representing and handling advanced 3D spatialization, but there is no interface that gives the user a solid grasp of / control over the situation.

Yes, there's not too much tuning-specific stuff besides the microtonal clouds, so I'll be brief :). You can imagine visualizing a static or morphing 3D shape/volume with colored points (sprites) contained inside of it. The color of the points could represent frequency or other parameters like event duration, and can change dynamically. You could even switch between multiple representations or have multiple views on one screen.

The really cool thing is that because I'm using Ambisonics, all the spatial data is basically in a transform space where you can sub-group and rotate (and apply other effects to) one or more points/sound events independently, and/or the containing 3D shape.
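[For readers unfamiliar with why Ambisonics makes this cheap: a mono source encoded into first-order B-format can be rotated as a whole field with a tiny linear transform, no re-panning per speaker. A minimal sketch of the standard math (my own toy code, not EGR's):]

```python
import math

SQRT1_2 = 1 / math.sqrt(2)

def encode_bformat(sample, azimuth, elevation):
    """First-order Ambisonic (B-format) encode of a mono sample
    arriving from a given direction (angles in radians)."""
    w = sample * SQRT1_2
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

def rotate_z(bfmt, angle):
    """Rotate the encoded sound field about the vertical axis: only the
    X/Y components change, so a whole scene (or sub-group) turns at the
    cost of four multiplies per sample."""
    w, x, y, z = bfmt
    c, s = math.cos(angle), math.sin(angle)
    return w, x * c - y * s, x * s + y * c, z
```

Rotating the encoded field by 90° is identical to having encoded the source 90° further around — which is exactly the "transform space" property described above.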

My ICMC 2004 whitepaper, now outdated, is available here: http://research.egrsoftware.com/whitepapers/. It mostly covers my 1st-gen tech.

I was going to write another one covering my current progress for ICMC 2005, but got depressed in realizing how much $$$ it would have cost me to present the paper not being affiliated with an institution. It cost me an arm and a leg to just do 2004 and to be honest I got screwed by the organizers in a major way (longer story). Folks from the 2005 committee were there and were sympathetic and I do believe 2005 is going to be way better; it is just in Barcelona though.. doh..

The latest info, though it is not verbose, is my proposal for JavaOne and the basic write-ups that I need to flesh out on my web site:
http://www.egrsoftware.com/javaone2005/

I have yet to really get solid info up on the middleware audio engine for 3D game development that I am working on.. That is my main commercial push, and hopefully I'll get it out in 2006. Note: wouldn't it be cool if developing musicians' software could actually provide an income!

It's going to be a lot rougher during my moonlighting phase in regard to time, but I'm definitely going to keep EGR Software alive.

Err.. Yes.. Not very microtonal related, but I do keep this area of theory/application in mind constantly.

Best,
--Mike

Founder & Lead Developer; EGR Software
http://www.egrsoftware.com

🔗Carl Lumma <ekin@...>

5/1/2005 2:20:48 PM

>Ever since I used a Kyma system back in 2000 I recognized that the most
>limiting aspect of the real time synthesis class environments is the lack
>of a detailed and expressive interface between the user and the system.

You can say that twice and mean it.

>Yes, not too much tuning specific stuff besides the microtonal clouds so
>I'll be brief :). You can imagine visualizing a static or morphing 3D
>shape/volume with colored points (sprites) contained inside of it. The
>color of the points could represent frequency to other parameters like
>duration of event and can change dynamically. You could even switch
>between multiple representations or have multiple views on one screen.

This is pretty broad. Do you have any mock-ups?

>I have my ICMC 2004 whitepaper that is outdated available here
>http://research.egrsoftware.com/whitepapers/. This mostly covers my
>1st gen tech.

So would you use Scream to build the interfaces described above?

>The latest info although it is not verbose is my proposal for JavaOne
>and the basic write ups that I need to flush out on my web site:
> http://www.egrsoftware.com/javaone2005/

Nice.

>I have yet to really get solid info up on the middleware audio engine
>for 3D game development that I am working on.. That is my main
>commercial push and hopefully I'll get it out in 2006. Note: Wouldn't
>it be cool if developing musicians software could actually provide an
>income!

Ableton and Native Instruments seem to be doing well.

>Err.. Yes.. Not very microtonal related, but I do keep this area of
>theory/application in mind constantly.

Rock on!

-Carl

🔗Catharsis <catharsis@...>

5/2/2005 1:24:46 PM

Apologies for hijacking the phidgets thread.. :)

At 10:43 AM 5/2/2005, you wrote:
Re: 3D spatial interface.
>This is pretty broad. Do you have any mock-ups?

Not at present.. My work on 3D gaming tech (Quake3 engine) and the 1st-gen GUI is highly informing me on this, so I know it is possible. I could make a mock-up in a 3D package at this point, but it would be nice to have a dedicated artist do that.. so it awaits implementation. The rub is that I am essentially going to be implementing my own GUI core directly in OpenGL.

In respect to the Java tech available, I am implementing a GUI environment like Project Looking Glass (http://www.sun.com/software/looking_glass/), but based on a much more efficient, non-backwards-compatible stack (re: needs latest-gen hardware — ATI X800+ or NVidia 6xxx+ graphics cards). Project Looking Glass is based on "inferior" tech (Java3D), and it is neither efficient nor using the latest graphics tech; this is also why it will run in more places than my proposed system. Typhon has the potential to be a modern desktop programming environment.

I.e., most developers use existing GUI APIs. To follow my vision I must complete core technology before creating the 2nd-gen work. I wish I could do it fulltime, and yeah, all of this is a bit staggering for one dev!

>>This mostly covers my1st gen tech.
>So would you use Scream to build the interfaces described above?

The paper was outdated months before ICMC'04 :). I have since split Scream into a couple of projects: Typhon, plus components like Scream. Typhon is the main runtime engine and graphics resources (this is alluded to in the above whitepaper as part of Scream), and Scream is now just a service-level plugin/component of Typhon that enables additional features for app developers who want to add communication with SC3 and/or other OSC/MIDI features to their apps.

The base tech Typhon is suitable to build applications beyond just music apps. Games and other performance GUI oriented software can just leverage Typhon for instance.

Typhon is a runtime environment based on the service-oriented architecture concept. Services such as Scream are made available at a lower level than most of the programmers who will use the environment to build software. Programmers request Scream functionality and then hook it up with, say, a GUI, etc.
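[The request-a-service pattern described here can be boiled down to a service locator. This sketch is purely illustrative — the class and service names ("ServiceRegistry", "scream.osc") are hypothetical stand-ins, not Typhon's actual API:]

```python
class ServiceRegistry:
    """Toy service locator: the runtime registers services at a lower
    level, and application code requests them by name."""
    def __init__(self):
        self._services = {}

    def register(self, name, service):
        self._services[name] = service

    def lookup(self, name):
        if name not in self._services:
            raise KeyError(f"service not available: {name}")
        return self._services[name]

class OscService:
    """Stand-in for a Scream-like OSC service."""
    def send(self, address, value):
        # a real service would transmit this over the network
        return f"{address} {value}"

# the runtime registers the service; an app requests it and wires it
# up to, say, a GUI control
registry = ServiceRegistry()
registry.register("scream.osc", OscService())
osc = registry.lookup("scream.osc")
```

The point of the indirection is that application code depends only on the service name/interface, not on how (or whether) the runtime provides it.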

>> http://www.egrsoftware.com/javaone2005/
>Nice.

This would have taken me a good 3-4 months fulltime+ to get going. Passion drove me far enough that I could have had this direction realized by June on _some_ hardware. The problem is that to actually release it I need to make it work on most recent hardware, and hence my dev environment needs funding -> job -> less time. Kind of open-ended when this direction will be taken further at this point.

I am kind of hoping I keep tabs with the O'Reilly folks and they invite me back to Foo Camp where I will try to have some of this ready to demo.

>Ableton and Native Instruments seem to be doing well.

Ableton, at least, is making software that takes a concept (looping performance) perfected by musicians using hardware (drum machines / samplers) in the early '90s and does essentially the same thing in software, bringing that capability into reach of everyone and their mother with a laptop.

NI has done more "cutting edge" work with environments like Reaktor. The rub though is that to make money (re: living) you must target your work at the status quo; hence NI went on to make standard audio software beyond Reaktor. I am guessing that they make a lot more money now from their standard software rather than from Reaktor.

In the medium run (5-15 years) I believe the game industry will be more receptive and have more funding for advanced audio techniques rather than the musician market.

More to mention in this direction... but I'll save it.. :)

Best,
--Mike

Founder & Lead Developer; EGR Software
http://www.egrsoftware.com