Subj : Re: Spidermonkey performance
To   : netscape.public.mozilla.jseng
From : "Kumar Iyengar"
Date : Mon Apr 12 2004 05:26 pm

Thanks for the prompt response. We have a proprietary workflow engine, and
I was trying to replace some of the workflow logic with JavaScript. I have
a 'driver' that runs the workflow with and without scripts and measures
the total time taken. I ran the tests using 'one runtime for all threads'
versus 'one runtime per thread' and found that the latter performed about
3 times better. In both cases I have:

  BeginRequest
  EvaluateRequest
  EndRequest
  GC

I also have SuspendRequest and ResumeRequest around native code that
blocks. (Stripped-down sketches of this bracketing and of my
precompilation test are at the end of this post.)

In any case, the performance I am measuring is the difference between the
two models. I was hoping that this is an area that has already been
investigated and that there are guidelines or such that I should be
following.

Thanks,
--kumar iyengar

"Brendan Eich" wrote in message news:407B1DFB.6040103@meer.net...
> Kumar Iyengar wrote:
> > I am new to SpiderMonkey and was trying to evaluate it with the
> > intention of using it within our products; performance is one of our
> > biggest concerns. I did some prototyping and have a few questions:
> >
> > *) I tried creating one Runtime for all the threads in our product and
> > the resulting performance was very slow. I have switched to a one
> > Runtime per thread model and found that the performance does improve.
> > Is one Runtime per thread (or thread pool) the recommended model in a
> > multi-threaded environment?
>
> One runtime per thread is not the best model. Why was performance "very
> slow", and compared to what? Please quantify your claims.
>
> > *) If the above is true, then does SpiderMonkey have to be built with
> > the JS_THREADSAFE flag? In other words, if I am using the library with
> > a one Runtime per thread model, can I use js32.dll without compiling
> > it with the JS_THREADSAFE flag?
>
> Yes, you need JS_THREADSAFE. However, since unshared objects (objects
> used by one thread at a time, or ever) use lock-free synchronization
> optimizations, I don't believe you can assume that locking at the object
> level, which is the dominant effect of turning on JS_THREADSAFE, is to
> blame for anything.
>
> Besides giving your elapsed time measurement comparisons, it would help
> if you profiled your embedding to see where it actually spends its time.
> Either a pc-sampling profiler such as jprof or gprof, or an
> instrumenting one such as IBM/Rational Quantify is good for a first cut.
>
> > *) Does keeping the compiled version of the script help performance?
> > My initial tests seemed to indicate that it does not.
>
> If you're recompiling the same script every time you execute it, you are
> indeed wasting effort that precompiling can save. Again, without any
> numbers, and some kind of profile (even "poor man's pc sampling" where
> you ctrl-C the program often under gdb and see what stacks you get),
> it's hard to give a useful comment.
>
> If your scripts are combinatorially or numerically complex, they may
> take so long that they dwarf the compiler costs.
>
> > *) Does anyone have any other tips on how to improve the performance?
>
> Measure twice, cut once. You are far from "cutting".
>
> /be
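
P.S. Here is a stripped-down sketch of the per-request bracketing I
described above. Error handling and our workflow glue are elided;
run_blocking_native_call() is a placeholder for our native code, and
JS_EvaluateScript stands in for the 'EvaluateRequest' step:

  #include "jsapi.h"

  extern void run_blocking_native_call(void);  /* placeholder for our
                                                  blocking native code */

  void evaluate_request(JSContext *cx, JSObject *global,
                        const char *src, uintN len)
  {
      jsval rval;
      jsrefcount depth;

      JS_BeginRequest(cx);        /* enter a request on this context */

      JS_EvaluateScript(cx, global, src, len, "workflow.js", 1, &rval);

      /* Suspend the request around native code that blocks, so the GC
         and other threads are not stalled waiting on this context. */
      depth = JS_SuspendRequest(cx);
      run_blocking_native_call();
      JS_ResumeRequest(cx, depth);

      JS_EndRequest(cx);          /* leave the request */

      JS_GC(cx);                  /* forced GC after every request */
  }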
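
And this is roughly what my 'keep the compiled version' test does, again
simplified: compile once with JS_CompileScript, root the result so it
survives the per-request GC, and re-execute it with JS_ExecuteScript:

  #include "jsapi.h"

  static JSObject *script_holder;  /* keeps the compiled script alive */

  /* Compile the workflow script once, up front. */
  JSScript *compile_once(JSContext *cx, JSObject *global,
                         const char *src, uintN len)
  {
      JSScript *script;

      JS_BeginRequest(cx);
      script = JS_CompileScript(cx, global, src, len, "workflow.js", 1);
      if (script) {
          /* Wrap the script in an object and root it so the script is
             not collected by the GC we run after every request. */
          script_holder = JS_NewScriptObject(cx, script);
          JS_AddNamedRoot(cx, &script_holder, "workflow-script");
      }
      JS_EndRequest(cx);
      return script;
  }

  /* Execute the precompiled script on each request, skipping the
     parser and code generator entirely. */
  JSBool run_compiled(JSContext *cx, JSObject *global, JSScript *script)
  {
      jsval rval;
      JSBool ok;

      JS_BeginRequest(cx);
      ok = JS_ExecuteScript(cx, global, script, &rval);
      JS_EndRequest(cx);
      return ok;
  }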