As Inti Einhorn mentioned in his presentation (which was great, man--I've had several conversations based on it in just the past couple of days), getting the Wii to work requires training. That'll be true of anything we build, too, especially w/r/t anything gestural.
With that in mind, I would strongly recommend that we look into neural networks as a way to train the machines. It sounds a little intimidating, but it really isn't. In fact, it ties in very closely to the reading about logic gates we've been doing so far.
IBM has a pretty good introduction to neural networks. In particular, read "Threshold logic units (TLUs)" and "How a TLU Learns." The rest is pretty math-heavy--though it's still good reading, if you're up for it. You can recreate AND, OR, XOR, etc. gates using neural networks pretty easily.
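To make that concrete, here's a minimal sketch of a TLU recreating those gates. The weights and thresholds are hand-picked for illustration, not trained values:

```python
# A threshold logic unit: sum the weighted inputs, fire if the sum
# meets the threshold. This is the basic unit from the IBM article.

def tlu(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def and_gate(a, b):
    # Both inputs must be on: 1 + 1 = 2 >= 2
    return tlu([a, b], [1, 1], threshold=2)

def or_gate(a, b):
    # Either input is enough: 1 >= 1
    return tlu([a, b], [1, 1], threshold=1)

# XOR isn't linearly separable, so one TLU can't do it -- but two layers
# can: XOR(a, b) = OR(a, b) AND NOT AND(a, b).
def xor_gate(a, b):
    return tlu([or_gate(a, b), and_gate(a, b)], [1, -1], threshold=1)
```

The XOR case is the classic demonstration of why you need more than one layer, which is exactly where the math-heavy parts of the article pick up.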
The real fun, though, would be to train a network by moving our controllers in a particular pattern, over and over, and running that data through the network until it figured out that the slashing motion we make should output a slashing motion to the screen.
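That "over and over until it figures it out" loop is basically the perceptron learning rule from the "How a TLU Learns" section, in miniature. A rough sketch (the training data here is a toy stand-in, not real controller readings):

```python
# Each pass over the examples nudges the weights toward the right
# answer; repeat until the unit stops making mistakes.

def train_tlu(examples, num_inputs, rate=0.1, epochs=100):
    """Learn weights and a bias for a TLU from (inputs, target) pairs."""
    weights = [0.0] * num_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            total = sum(i * w for i, w in zip(inputs, weights)) + bias
            output = 1 if total >= 0 else 0
            error = target - output
            # Nudge each weight in proportion to its input and the error.
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Teach it the AND gate purely from repeated exposure:
and_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_tlu(and_examples, num_inputs=2)
```

Swap the toy examples for slices of accelerometer data and the targets for gesture labels, and that's the shape of the training we'd be doing -- though a real gesture recognizer would need the multi-layer math the article gets into later.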
Okay, so I'm woolgathering for a second here, but bear with me. What if the inputs to a network were, say, 24 neurons? Three neurons would read each axis of the controller for an eighth of a second. Assuming we can capture a whole gesture in one second, the network would take eight "slices" of the action and use those to judge what motion has been made. Maybe there's a minimum threshold on the acceleration, too, so that the network doesn't have to consider neutral positions, only the ones that are interesting.
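Here's roughly what building that 24-value input would look like. The sample rate, the `samples` format, and the neutral cutoff are all assumptions on my part -- this just shows the slicing idea:

```python
# Collapse one second of (x, y, z) accelerometer readings into 24
# inputs: eight time slices x three axes. `samples` is assumed to be
# a list of (x, y, z) tuples covering the whole gesture.

NUM_SLICES = 8
NEUTRAL_THRESHOLD = 0.1  # made-up cutoff; below this, call it "not interesting"

def gesture_inputs(samples):
    """Return 24 values: the average of each axis over each eighth-second slice."""
    slice_len = len(samples) // NUM_SLICES
    inputs = []
    for i in range(NUM_SLICES):
        chunk = samples[i * slice_len:(i + 1) * slice_len]
        for axis in range(3):
            # Average the axis over the slice, zeroing out near-neutral noise.
            avg = sum(s[axis] for s in chunk) / len(chunk)
            inputs.append(avg if abs(avg) >= NEUTRAL_THRESHOLD else 0.0)
    return inputs
```

The thresholding at the end is the "only the interesting positions" idea: a resting controller contributes zeros, so the network only has to learn from the motion itself.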
I'll keep noodling with this idea. I really ought to go out and pick up a WiiMote (if not a whole Wii) to see how well I can train that with the software I've got already.
If you'd like to leave comments, you can log in using your A server username and password. Props to Dave D. for making that work department wide.
Copyright Mike Edwards 2006-2009. All content available under the Creative Commons Attribution ShareAlike license, unless otherwise noted.