Sorry for the blockquote again, but check out Mark McCahill's blog.
Besides working on better looking avatars, my group has been exploring easy ways to get robots into Croquet.
If we are going to build synthetic worlds, it makes some sense to populate them with both live people and simulated characters. The simulated characters can be a sort of automated tour guide, among other things. Beyond the obvious, recordable avatars give you a way to capture the interactions in a social setting and play them back for later study. Even better, during playback you can hop from one avatar's viewpoint to another, and do things like replace the audio track. This fits in nicely with the work we are doing for the immersive language instruction project. Being able to watch a scenario from different viewpoints, play it back, and reflect on what is happening is powerful. I keep realizing that this sort of mutable time is at least as important as the 3D visuals for creating learning environments.
The architecture of Croquet makes it straightforward to capture all the messages for a given avatar as you drive the avatar around the space, and then later inject those messages into a robot avatar to play back what happened. So… building a sort of robot by recording live interactions was pretty easy. What helped a lot was realizing that this could be thought of as a virtual multitrack recording session, but instead of music, we are recording sound, motion, and gesture. I need to get some of the dance and theater people interested in all this…
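To make the record-then-replay idea concrete, here is a minimal sketch in Python rather than Croquet's Squeak Smalltalk. The class and method names (`MessageRecorder`, `record`, `replay_into`) are my own invention for illustration, not Croquet's actual API; the point is just that a track of timestamped messages captured from a live avatar can be injected into a robot avatar later, preserving timing.

```python
import time

class MessageRecorder:
    """Capture timestamped messages sent to an avatar so they
    can be replayed later into a robot avatar."""

    def __init__(self):
        self.track = []                 # list of (offset, message, args)
        self.start = time.monotonic()   # when recording began

    def record(self, message, *args):
        """Log one message with its offset from the start of recording."""
        self.track.append((time.monotonic() - self.start, message, args))

    def replay_into(self, robot):
        """Send the recorded messages to a robot avatar,
        sleeping between them to preserve the original timing."""
        start = time.monotonic()
        for offset, message, args in self.track:
            delay = offset - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            getattr(robot, message)(*args)
```

The key design point, as the paragraph above notes, is that nothing about the robot is special: it just receives the same message stream a live avatar would, which is why treating it like a multitrack recording session works.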
This verges on desktop animation movie making, but with the difference that you can walk through the world the movie is taking place in while it is playing. So instead of just using the synthetic world to create movies (which is the normal approach to machinima), we can do much more interesting things by NOT dumbing the virtual world down into a non-interactive movie.
Next steps: getting a way to edit/tweak the motion/sound that we record/playback, getting a programmatic scripting method in place to augment the recorded motions/sounds. I still need to create the immersive Croquet world version of the immersive language demo movie we recorded a while ago, but now the tools are fairly functional. Our user interface is still clumsy to use, but overall this feels like a good approach to making it possible for anyone to create robots/NPCs. But maybe we should call them Croquet zombies instead?
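As an illustration of what programmatic tweaking of a recorded track might look like, here is a sketch that operates on a track represented as `(timestamp, message, args)` tuples. Both the representation and the function names are assumptions of mine, not anything Croquet actually provides; they just show that once the recording is data, editing it (stretching time, or swapping out the audio track while keeping motion and timing) becomes ordinary list processing.

```python
def time_stretch(track, factor):
    """Slow a recorded track down (factor > 1) or speed it up
    (factor < 1) by rescaling every timestamp."""
    return [(t * factor, msg, args) for t, msg, args in track]

def replace_message_args(track, message, new_args_fn):
    """Swap the payload of every occurrence of one message type,
    e.g. replacing the audio track while leaving motion untouched."""
    return [(t, msg, new_args_fn(args) if msg == message else args)
            for t, msg, args in track]
```

For example, `replace_message_args(track, "say", lambda args: ("bonjour",))` would dub a new audio line over every recorded utterance without disturbing the recorded movement, which is exactly the kind of edit the language-instruction playback needs.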