The first Second Life Views meeting was a huge success. We had four discussions, attended by the Panel and various Linden Lab staff. I'm going to post the notes here over the next few days. Please feel free to comment, and we'll continue the different discussions in-world, on mail lists, or in the forums as appropriate.
The first discussion I want to highlight was led by Ventrella Linden, who demonstrated a new avatar technology we're calling Physical Avatar. Basically, this technology allows for in-world posing of avatars and expressive puppeteering, along with physical rag-doll effects.
Naturally the discussion leaped from posing to full animations, and included questions about the road map for further development, thoughts about the impact on the SL animation market, and ideas for more features. Of particular interest to the Residents involved was where we might go with scripting of animations.
We have additional questions for those of you interested in scripting and animation. If you'd like to participate in the conversation, please join the Scripter's mail list.
Discussion Notes: Physical Avatar
what is it?
- in world poser
- expressive puppeteering
- explicit world coordinate system representation
- allows for rag-doll physics, driven by spring physics; compensate for too much springiness with added constraints to create a pseudo-rigid body
- largely viewer-side, with a packet system that allows the server to control ordering
- in the future: collaborative animations (pose me -> ok)
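The spring-physics point above is easy to illustrate. The sketch below is a minimal one-dimensional toy, not Linden Lab's implementation: a joint is pulled toward a target by a damped spring, and adding a hard offset constraint tames the springiness into pseudo-rigid behavior.

```python
# Toy illustration (not SL's code): a 1-D joint pulled toward a target by a
# damped spring. Without a constraint it overshoots and oscillates; clamping
# the offset from the target ("added constraint") makes it pseudo-rigid.

def simulate(target, steps, k=40.0, damping=4.0, dt=0.01, max_offset=None):
    """Integrate a damped spring toward `target`; optionally clamp the
    joint to within `max_offset` of the target."""
    pos, vel = 0.0, 0.0
    peak = 0.0
    for _ in range(steps):
        force = k * (target - pos) - damping * vel
        vel += force * dt          # semi-implicit Euler step
        pos += vel * dt
        if max_offset is not None:
            # Hard constraint: never stray more than max_offset from target.
            lo, hi = target - max_offset, target + max_offset
            if pos < lo:
                pos, vel = lo, 0.0
            elif pos > hi:
                pos, vel = hi, 0.0
        peak = max(peak, pos)
    return pos, peak

springy_pos, springy_peak = simulate(1.0, 2000)
rigid_pos, rigid_peak = simulate(1.0, 2000, max_offset=0.05)
```

The unconstrained run noticeably overshoots the target before settling; the constrained run never strays more than `max_offset` from it, which is the "pseudo-rigid" effect described above.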
facial expressions and fingers are not included yet
need a file format that supports morphs; .bvh only good for skeletal positioning
LSL scripting would open this up; could determine key points on avatars that let them sync up to shake hands for example
can currently composite animations but most people don't do that
.bvh will be compatible with in-world-created anims
- need to announce this in advance for people who have created animations; shouldn't hurt the AO market, which is open source
- allows more people to experiment with animations; poser anims are still superior
- test with AOs
- from the economic side, inform the animation industry that this is coming so the market is prepared
- spontaneous animations won't compete with key frames
- one key element is the ability to add constraints so that, for example, if you're riding a motorcycle your hands are in the right place
interest in LSL controls to set constraints
initial plan is to allow people to make an anim that works across body types
time frame for coordinating animations between avatars depends on having constraints in place for individual avs; then we need to create the UI for both using and agreeing on an anim
can build an animation with objects in mind and make it work within a specific environment using tags that attach specific points on the av to the object (e.g. hands to motorcycle handlebars)
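The tag idea above can be sketched as a simple lookup. Everything named below (`left_grip`, `apply_constraints`, and so on) is invented for illustration and is not part of any SL API: an object publishes named attachment points, an animation declares which avatar joints bind to which tags, and the constraint overrides the keyframed joint position.

```python
# Hypothetical sketch of tag-based constraints (names invented, not an SL API).

# Object-side: world-space attachment points, keyed by tag.
motorcycle_tags = {
    "left_grip":  (0.35, 1.02, 0.90),
    "right_grip": (-0.35, 1.02, 0.90),
}

# Animation-side: which avatar joints bind to which tags.
anim_bindings = {
    "left_hand": "left_grip",
    "right_hand": "right_grip",
}

def apply_constraints(joint_positions, bindings, tags):
    """Return joint positions with tagged joints snapped to their targets."""
    out = dict(joint_positions)
    for joint, tag in bindings.items():
        if tag in tags:
            out[joint] = tags[tag]  # the constraint wins over the keyframe
    return out

keyframe = {"left_hand": (0.3, 1.1, 0.8), "right_hand": (-0.3, 1.1, 0.8),
            "head": (0.0, 1.7, 0.2)}
posed = apply_constraints(keyframe, anim_bindings, motorcycle_tags)
```

The same keyframed animation would then "work within a specific environment": only the tagged joints (hands) are pulled to the handlebars, while untagged joints (head) keep their keyframed positions.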
what's the impact on client/server load? 144 bps relatively uncompressed, so we can probably get it down to about 25% of this size
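Taking the figure quoted above at face value (units as stated in the meeting), the arithmetic works out as:

```python
# Back-of-the-envelope check of the bandwidth estimate above.
uncompressed = 144                # per-avatar animation stream, as quoted
compressed = uncompressed * 0.25  # "about 25% of this size"
```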
UI still in progress; want to make it as direct and spontaneous as possible, so just using the CTRL key to move the av from key attach points. from there we'll plug in additional features
scripting will open it up for content creation in a tremendous way; scripting to 'get' position as well as 'set' in order to unite two avatars in a hand shake or hold hands move; need to get all joints accessible in LSL asap
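The "get as well as set" idea above can be sketched in miniature. The accessor functions below are invented stand-ins, not real LSL calls: each script reads its avatar's hand position, and both hands are then set to the shared midpoint so the two avatars meet for a handshake.

```python
# Hypothetical sketch of get/set joint access (function names invented for
# illustration; no such LSL calls existed at the time of this discussion).

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

# Stand-in joint store for the two avatars' right hands.
joints = {
    "av1.right_hand": (0.0, 0.0, 1.0),
    "av2.right_hand": (0.6, 0.2, 1.0),
}

def get_joint(name):
    return joints[name]

def set_joint(name, pos):
    joints[name] = pos

# Unite the two avatars: both hands move to the shared midpoint.
meet = midpoint(get_joint("av1.right_hand"), get_joint("av2.right_hand"))
set_joint("av1.right_hand", meet)
set_joint("av2.right_hand", meet)
```

This is exactly why "get" matters alongside "set": without reading the other avatar's joint, a script couldn't compute a meeting point for a handshake or hand-hold.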
feature set that developers will use to deliver packages to their customers
need to build a testing model for developers; including a road map for what order new features are coming in
LL opinion: expressive asap (pose + save); this is a content creator mindset. just to pose is a more casual user mindset
- release animation editor and LSL in tandem for it to be useful; also this opens up content creation to a more casual user (good thing!)
- ctrl+shift allows you to rotate on two planes (one suggestion is being able to select multiple joint points to, say, shrug shoulders)
- would like it to work similarly to camera controls, although these aren't used by casual users
LL opinion: Critically important to test this for usability with a broad spectrum of users.
Question: can this be tuned to give more natural default behaviors for casual users who aren't used to working with anim tools and compensating for tool-based eccentricities? LL response: there is a skill-based gradient here as well
- releasing as a demo first is a big departure from what we've done before, but it would be a good intro for content creators to what will be possible; the risk of introducing it to casual users is that it doesn't meet their expectations (also speaks to testing)
LL opinion: the ability to take your avatar and make it expressive in real time is very compelling. a more natural look can be tuned.
there is a set of manipulation tools you're not seeing that gives lots of additional possibilities