
November 15, 2005

Comments

blaze

What was the average memory consumption per script for LSL2, as a comparison?

When you say "processor intensive benchmarks" are you referring to the ability for us to instrument / investigate how our code runs? I.e. find out causes of lag?

Jim Purbrick

Blaze, LSL2 uses a single 16k block for the stack, heap and text of every script. This makes it very easy to save, load and ship scripts around, but it wastes memory for very simple scripts which never need 16k and means that complex scripts have to be broken up into lots of smaller communicating scripts which each fit into the 16k block.
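For example (an illustrative sketch only, not a real production script), one common way scripters spread a bigger job across several 16k scripts is a worker that receives jobs over link messages and replies with results:

// Hypothetical worker script: one of several 16k scripts cooperating in a prim.
// A controller script sends work via llMessageLinked; this worker replies with
// a result, keeping each script's stack, heap and code within its own 16k block.
default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        if (msg == "square")
        {
            // Reply to the other scripts in this prim with the computed result.
            llMessageLinked(LINK_THIS, num * num, "square_result", id);
        }
    }
}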

The processor intensive benchmarks are just a few benchmarks taken from the Mono distribution which I used to initially compare the performance of LSL2 and Mono before I could run real SL scripts using Mono. A script which generates the Fibonacci series runs 100 times faster on Mono embedded in the SL simulator than on the LSL2 interpreter in the SL simulator. You don't typically see these 100x speedups with real SL scripts, as LSL2 is so slow that very few people write processor intensive scripts.
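To give a flavour of the kind of script involved (an illustrative sketch, not the actual benchmark code), a naive recursive Fibonacci in LSL looks like this:

// Illustrative only: a naive recursive Fibonacci, the kind of processor
// intensive work used to compare LSL2 and Mono.
integer fib(integer n)
{
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

default
{
    touch_start(integer total_number)
    {
        llResetTime();
        integer result = fib(20);
        // llGetTime() returns the seconds elapsed since llResetTime().
        llOwnerSay("fib(20) = " + (string)result
                   + " in " + (string)llGetTime() + " seconds");
    }
}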

Blind Wanderer

When the LSL2 compiler writes the bytecode it is actually creating the memory space too, which is really cool. How exactly does Mono do its garbage collection? Is switching to Mono going to present major problems with slow garbage collection? Is llGetFreeMemory going to actually work?

Jim Purbrick

Blind, I've spent a lot of time profiling Mono now and haven't seen the GC take up any appreciable time so far. Even so, the Mono project is working on making the garbage collection system pluggable and on integrating a more advanced, precise, generational garbage collector, so we could potentially move to a collector that more closely matches our needs.

The way I'm currently doing memory accounting is by using the Mono profiling API to catch every memory allocation and assign it to the currently running script. Adding the size of the script's bytecode gives the total amount of memory used and so the free memory available to the script. When a garbage collection occurs, the memory used by any freed objects is returned to the scripts that allocated them so it can be used again. If a script is still over its limit after a collection, it has run out of memory. I need to add extra logic to stop scripts allocating huge amounts of memory between GCs and to stop them rapidly creating huge amounts of garbage as a denial of service, but I think it's going to work.
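For reference, llGetFreeMemory is how a script sees that free memory figure from the inside; an illustrative (hypothetical) monitor script would be as simple as:

// Hypothetical monitor script (illustrative only): periodically reports
// the free memory available to this script via llGetFreeMemory.
default
{
    state_entry()
    {
        // Fire a timer event every 60 seconds.
        llSetTimerEvent(60.0);
    }

    timer()
    {
        llOwnerSay("Free script memory: "
                   + (string)llGetFreeMemory() + " bytes");
    }
}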

Blind Wanderer

You know, it's probably a good thing then that I didn't try to hack the compiler output and make a script that could overwrite its own bytecode. I was going to do it by using negative addresses, converting strings to integers and writing raw bytes with the global memory write bytecode (local would work too). But since the memory for each script isn't sequential, this would be a bad idea (though writing random data into other scripts does sound fun).

Blind Wanderer

On that thought, how does Mono handle memory violations?

Jim Purbrick

Blind, CLR assemblies can be verified before being loaded or run to make sure that they don't contain any corrupt structures or unsafe code, which is what we'll be doing to make sure that Mono scripts don't cause memory violations.

Miguel de Icaza

I'm glad to see these early results from Second Life's use of Mono.

Two sets of good news: we should be rolling out in February some advanced optimizations for the Mono JIT which will improve your performance even further. For computationally intensive applications, the "-O=all" switch will instruct the code generator to use a set of optimizations typically found in optimizing compilers.

Early next year we should also have a new precise GC that might help you in some scenarios.

And of course, we would be glad to help you tune Mono to meet your performance needs. We have tuned Mono's class libraries based on the needs of our users and there is plenty of low hanging fruit in that area.

Miguel.

Jim Purbrick

Hi Miguel, great to have you drop by! And good to hear about the upcoming improvements to Mono that will help us out. I'll be posting details about our experiences with Mono and the issues we're currently facing over the next few weeks, so please keep checking back!

Chris Omega

Can the Mono profiling API be easily exposed to scripters in-world? Many would benefit from ways to analyse their scripts' performance and from debugging tools.
