
All-new test results: What browser will you use to run Web apps?

By Scott M. Fulton, III, Betanews


Three laptop computers, all of them cool-looking, all with well-respected brands, all have the features you want, all sell for the same price. This isn’t going to be a toy for you; it will be, for at least the next few years, the engine for your work and your livelihood. How do you make a purchasing decision? You check online to see which one is the better performer, and which one other customers prefer.

Five Web browsers, all of them cool-looking, all with well-respected brands, all have the features you want, all of them…are free. But this isn’t going to be a newspaper reader or a Twitter feed carrier for you; it will be, for at least the next few weeks, the engine for your work and your productivity. Sure, you’ll install all of them. But which one will you install as your default, and which one will you trust with your everyday applications?

Click here for a complete introduction to the Betanews Comprehensive Performance Index.


When the everyday functionality of Microsoft Office moves to the Web, and PCs are sold not with Office installed but with desktop shortcuts to Office Web Apps instead, and when the applications you run depend on the Web browser you choose, the decision you make about Web browsers will be more important than ever before. If you care about whether an AMD processor that sells for eight dollars less than its Intel counterpart can perform at the same levels under overclocking, or whether adding a second graphics card will crank out ten more frames per second after you add that fourth monitor, then you should care about the performance of your software platform. The differences here are not so incremental.

At Betanews, we’ve been testing Windows-based Web browsers with greater and greater accuracy throughout this year, with the objective of being able to give you a simple and indisputable way to consider their all-around performance. All through that time, we’ve been listening to your responses as to how we can improve our methods, and we’ve been getting a lot of responses.

Here’s what we’ve learned from you:

  • You want a simpler, flatter index. Just as the Dow Jones Industrial Average represents the general state of investment in the American economy at any point in time, you need one number that represents the all-around performance of every browser in the field, something you can remember and discuss.
  • You also want all the data. Specifically, you want to be able to see exactly how that final index number is obtained. Our verbal explanations haven’t always been enough, and you know from personal experience that a browser we’ve called relatively slow is actually faster in the areas that matter specifically to you.
  • You want us to cast our net wider, and find a fairer and more accurate way to assess basic performance. You’ve told us that page load times, to you, represent basic performance — if a page loads faster, it’s a faster browser. But timing how fast a browser loads Yahoo or Facebook, for example, is a process loaded with uncontrolled variables — the pages change, the network ebbs and flows, and ads can be textual or in interactive Java 3D. You need a fair and regulated means for assessing real-world page loading speed.
  • You warned us not to trust browsers’ different methods for reporting their own load times. Because different browsers work in different ways (they fire the JavaScript onLoad event at different times, for different reasons), we can’t accept onLoad as the complete “finish line.” Instead, we need to pay attention to the whole page loading process, specifically with regard to how soon the scripts inside a page can access the elements of that page and start styling and displaying them.
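The distinction above can be made concrete: the DOMContentLoaded event fires when scripts can first reach and style the page’s elements, while the load (onLoad) event waits for every subresource, so a fair harness times both rather than trusting one self-reported finish line. Here is a minimal illustrative sketch (not Betanews’ actual test harness, and with invented sample timings):

```javascript
// Illustrative sketch only, not Betanews' actual harness. Rather than
// trusting the browser's own onLoad report, a test page records its
// own timestamps and derives two separate intervals.
function loadIntervals(t) {
  return {
    // time until scripts can first access and style page elements
    domReadyMs: t.domContentLoaded - t.navigationStart,
    // time until every subresource (images, ads, frames) has arrived
    fullLoadMs: t.load - t.navigationStart,
  };
}

// In a browser, the timestamps would be captured with listeners like:
//   document.addEventListener('DOMContentLoaded',
//     () => t.domContentLoaded = Date.now());
//   window.addEventListener('load', () => t.load = Date.now());
const sample = { navigationStart: 0, domContentLoaded: 180, load: 950 };
console.log(loadIntervals(sample)); // → { domReadyMs: 180, fullLoadMs: 950 }
```

A page that becomes scriptable in 180 ms but takes 950 ms to fully load tells a very different story than either number alone.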

We already had four tests in our previous Web browser suite, and after careful research, we’ve added four more that focus on areas that you say matter to you, and that some of you say we’ve overlooked. And as usual, you’re right: If we’re going to claim to measure “all-around” performance, we need to cover all the bases.

On the next page, we introduce our new solution and our response to your many, many very good suggestions. This is your comprehensive index.

Next: How Betanews will measure Web browser performance…

The Betanews Comprehensive Relative Performance Index (CRPI)

Click here for a complete introduction to the Betanews Comprehensive Performance Index.


Today, we unveil something very CRPI. It’s an acronym that we hope sticks — yes, I’m calling it the “creepy index,” and you can too. One of the features you told us you wanted was a more thorough explanation of what’s covered, and how our testing works. Now you have it.

Starting today, Betanews will present our newly expanded browser test suite, and our single CRPI index number to represent relative all-around performance on all Windows platforms. The link above will take you to our very exhaustive explanation of how the new test works. For our regular readers, we will continue for a short time to report (but not chart) some browser scores on our old index just for comparison purposes with previous articles.

[Image: Betanews CRPI slideshow badge, 9/21/09]

And for those of you who want the absolutely complete breakdown of all our scores in each test, now you have that too. The complete set of charts is now available by clicking the picture at right. (You’re welcome.) Now, here is our first set of CRPI results:

Betanews Comprehensive Relative Performance Index September 18, 2009

As you can see, despite everyone’s best efforts to shake things up, the running order of browsers in the race has not changed at all. Our CRPI numbers are lower than those on our old index for a few reasons: We’ve discovered areas where the speed demon browsers we had been reporting on are not really so swift. So you won’t be seeing numbers approaching “20” in our new index, at least for a while.
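Betanews’ full CRPI methodology is described at the linked introduction; what follows is only a hypothetical sketch of how any relative, averaged index of this kind works: normalize each browser’s raw result in each test battery against a baseline result, then average the ratios into one number. All numbers below are invented for illustration, not actual test data.

```javascript
// Hypothetical sketch of a relative performance index, NOT the actual
// CRPI formula: each test contributes the ratio of a browser's score
// to the baseline's score, and the index is the mean of those ratios.
// Higher raw scores are assumed to mean better performance here.
function relativeIndex(scores, baseline) {
  const tests = Object.keys(baseline);
  const sum = tests.reduce((acc, t) => acc + scores[t] / baseline[t], 0);
  return sum / tests.length;
}

// Invented example data: the baseline browser scores 1.0 by definition.
const baseline  = { rendering: 4.0, computation: 10.0, pageLoad: 5.0 };
const contender = { rendering: 6.0, computation: 30.0, pageLoad: 5.0 };
console.log(relativeIndex(baseline, baseline));  // → 1
console.log(relativeIndex(contender, baseline)); // → (1.5 + 3 + 1) / 3 ≈ 1.83
```

One virtue of a relative scheme like this is that adding new test batteries (as Betanews did here) reshapes every browser’s single number at once, which is why scores shifted downward across the board.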

Adding rendering tests to our suite did make things better for Opera. Although it’s still behind, the gap between Opera 10 (CRPI: 6.38) and Firefox 3.5.3 RTM (7.35) is not nearly as insurmountable as it once seemed. In page loading tests, Opera is a very strong performer along with Safari; and in CSS rendering, Opera is the absolute champion. In fact, as our complete charts will now show you, the just-released RTM version of Opera 10 scores better at CSS rendering on Vista (!) than any other browser on any other platform, including XP and Win7. Vista, the slowest platform of all. We were so baffled by this result that we performed this test multiple times over the weekend, and it’s verified: Opera 10 was twice as fast at rendering on Vista as it was on XP (12.68 for Vista, 6.11 for XP).

What baffled us even more was the performance of the latest Opera 10.1 daily snapshot build under the same conditions: It performed poorly in Vista for the same test (3.89 for Vista, 8.37 for XP), while its score improved in XP.

Opera’s overall scores improved on the CSS rendering battery, in part, because of adjustments we made to this battery in response to some readers’ assertions, which turned out to be correct: Opera was mis-reporting its own rendering times in XP, not being fair to itself. In one of the new test batteries we added for the CRPI — testing simple DHTML geometric plotting — Opera is also an astounding performer, posting a score of 3.83 in XP compared to 2.17 for Firefox 3.6 Alpha 2 and 2.43 for the overall performance champion up to now, Google Chrome 3 (CRPI: 15.27).

Our additions and adjustments proved what our eyes were telling us, that Opera really is the faster renderer…at least most of the time. If only rendering were everything, Opera would still be king.

Meanwhile, Firefox reigns supreme with simpler computational tests. Right now, it’s the champion of the Celtic Kane battery, with Firefox 3.6 Alpha 2 (CRPI: 8.82) posting an 11.21 score there in XP versus Opera 10’s 3.18. The first private builds of Firefox 3.7 Alpha 1 are posting even better scores; but it appears the newest Mozilla browser’s computational scores come at the expense of rendering ability, which may be why 3.7’s overall CRPI comes in just above the RTM version, at 7.36.

It’s in the computational scores where Chrome earns its racing stripes. Right now, Chrome 3’s performance on the SunSpider JavaScript test yields a score of 63.74, with the dev build of Chrome 4 (CRPI: 14.69) not far behind at 62.65. With Firefox 3.6 Alpha 2 posting Mozilla’s best SunSpider score of 32.89, you can see that Chrome truly is almost twice as fast as Firefox in heavy computation.
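Timing-based suites like SunSpider generally work by running tight JavaScript workloads, measuring elapsed milliseconds, and converting time into a higher-is-better score. A toy sketch of that idea, with an invented workload and scale (not SunSpider’s actual code or methodology):

```javascript
// Toy benchmark sketch (not SunSpider's actual code): run a numeric
// workload, time it, and invert elapsed time into a score so that
// faster runs yield higher numbers, as timing-based suites do.
function workload(n) {
  let sum = 0;
  for (let i = 1; i <= n; i++) sum += Math.sqrt(i);
  return sum;
}

function benchmark(fn, arg, scale = 1000) {
  const start = Date.now();
  const result = fn(arg);
  const elapsedMs = Date.now() - start;
  // +1 guards against division by zero on sub-millisecond runs.
  return { result, elapsedMs, score: scale / (elapsedMs + 1) };
}

const run = benchmark(workload, 1e6);
console.log(run.score.toFixed(2)); // higher score = faster run
```

Real suites repeat each workload many times and aggregate the runs, which is also why Betanews re-tests surprising results before reporting them.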

Where does Safari shine? We wish we could provide an accurate CRPI for Safari’s test builds. The reason we can’t is that Safari testing involves grafting the latest version of the WebKit renderer onto the Safari front-end. While doing so dramatically improves the test browser’s rendering scores, it sacrifices much of Safari’s computational scores and all of its AJAX scores, rendering the overall CRPI score meaningless. The test build is useless as an all-around browser.

But the release version of Safari 4 has its strengths, as our added test batteries revealed. When using multiple JavaScript libraries for CSS selectors, Safari 4 (CRPI: 14.42) posted a fantastic 10.56 score in our new SlickSpeed battery on XP SP3, versus 12.43 for both stable and dev builds of Chrome. And Safari would be the page load leader with a 10.94 score on XP, were it not for Chrome 4’s bewilderingly impressive 11.94 score there on Win7. That seemed to be a blip to us too, so again, we re-tested and re-tested and came up with the same results each time.

Our new scoring system makes things a bit flatter and a bit fairer. But thanks to Chrome 3’s continually superior computational scores, it’s still our performance leader. Apple Safari 4 is very close behind, and actually ahead of the Chrome 4 dev platform overall. The minor difference in our scores between the release versions of Chrome and Safari certainly wouldn’t be enough for folks with a Mac at home to decide not to install Safari on their Windows-based platforms at work.

In the coming days, as we test Microsoft’s Office Web Apps for the first time, we’ll report on which browsers run the Technical Preview apps better and faster, and which ones show signs of stress. But the browser development field is a full-scale battle now for performance supremacy, and Betanews will be the one giving you the full-field rundown.

Betanews Comprehensive Relative Performance Index September 18, 2009, broken down by platform

Copyright Betanews, Inc. 2009


