Performance of Web Applications on Client Machines
Bogdan Țicău, Marius-Andrei Cureleț
Abstract. This paper discusses the performance of Web browsers and Web
applications in general, and of JavaScript code on the client machine in
particular. Different browsers use different JavaScript engines, which leads to
different performance when executing JavaScript code, the main bottleneck on
the client machine; this made a benchmark of the most popular browsers
necessary. Improving a Web application generally means faster load times and
computations on the client machine, and involves improving the JavaScript
code and using a proper library to ease the programmer's job. Certain tools can
be used to profile a Web application and discover flaws in design and bugs in
code.
Keywords: JavaScript, jQuery, tools, browser.
1 Introduction
In today's world of software development, applications are produced rapidly.
Clients and employers look for individuals who can build applications very fast,
focusing only on getting them live as soon as possible. This often leads to
neglecting application maintenance altogether, at which point clients start to lose
users and business.
The code that makes up an application is not the main focus; it can be written by
virtually anyone with some knowledge and experience. Improving the performance
of an application, however, especially one that was put together rapidly, can be
quite risky and can cause many negative effects if done improperly. Planning this
stage will therefore help you avoid bad results.
JavaScript programs are often collections of short little scripts used to manipulate
the DOM of a Web page. What kind of performance problems could such short
scripts really present? Because JavaScript is used to give "life" to a Web page with
the help of AJAX, and AJAX uses the network, we might expect the client-server
communication to be the issue. But as we will see, most of the performance
problems are in the client code.
There are many JavaScript engines available, each with a different implementation
and much weaker specifications than other languages have. Problems thus surface
in the form of different possible execution profiles for the same piece of JavaScript
code. The good news is that only a handful of browser implementations are widely
used, so we do not have to examine and understand all of them.
At the height of the Web 2.0 era, everyone is expected to blog or post in forums
about problems they encountered with their JavaScript code. JavaScript being a
lightweight scripting language, almost anyone who likes programming will start
developing their own scripts and thus run into problems inherent in the limitations
of the language. Network performance is one of the biggest problems and a lot of
people talk about it; execution performance (CPU utilization) comes next, and a
few people complain about memory utilization. The first thing that becomes
obvious to any programmer who wants to learn all there is to know about
JavaScript and how to improve their applications is the very small number of
benchmarks that are available.
After you write your JavaScript you can benchmark it on the Web, but since the
world of the Web revolves around caching, you must consider how the benchmark
uses the cache. Often the caching of Web pages is very well hidden, and when a
benchmark says that A is better than B with no obvious reason why, you know that
something is wrong and you cannot fully trust that benchmark. [1]
2 Benchmarking the browsers
A better JavaScript engine hit the Web with the release of the Google Chrome
browser: the V8 engine. As mentioned before, there are many JavaScript engines
that are actively used in browsers and constantly improved: [2]
1. JavaScriptCore: the engine behind Safari/WebKit (up until Safari 3.1).
2. SquirrelFish: the engine used in Safari 4.0.
3. V8: the engine behind Google Chrome.
4. SpiderMonkey: the engine used by Firefox (up to and including Firefox 3.0).
5. TraceMonkey: the engine used in Firefox from version 3.1.
6. Futhark: the engine used by Opera 9.5 and newer versions.
7. IE JScript: the engine behind Internet Explorer.
The most popular of the above browsers have been tested using benchmarks, and
below you can see the tests that were made and how they reflect actual Web
application performance. We used only the first of the following benchmarks:
1. SunSpider: The popular JavaScript performance test suite released by the WebKit
team. Tests only the performance of the JavaScript engine (no rendering or DOM
manipulation). Has a wide variety of tests (objects, function calls, math, recursion,
etc.)[3]
2. V8 Benchmark: A benchmark built by the V8 team, only tests JavaScript
performance - with a heavy emphasis on testing the performance of recursion.
3. Dromaeo: A test suite built by Mozilla, tests JavaScript, DOM, and JavaScript
Library performance. Has a wide variety of tests, with the majority of time spent
analyzing DOM and JavaScript library performance.
The table below shows the actual data from the SunSpider benchmark for each of
the four browsers tested:

Test          Mozilla         Google Chrome   Opera     Microsoft IE
              Firefox 3.5.3   3.0.195.27      10.1750   8.0.7600
3D            182.2           93.4            413.6     679.6
access        167             50.4            606.4     973.4
bitops        52.6            56.4            496.2     762
controlflow   45.6            3.8             59.4      154
crypto        75.8            49.4            239.6     439.4
date          209.6           71              268.8     497.2
math          83.4            61.4            316.2     620.2
regexp        116             19.2            119.4     227.8
string        441             212             1008.8    1064.8

Table 1. SunSpider benchmark numbers
The numbers represent the milliseconds necessary to finish each test category, so lower is better.
The same data is plotted below as a bar chart, showing the execution time in
milliseconds for each test category, per browser (Mozilla Firefox 3.5.3, Google
Chrome 3.0.195.27, Opera 10.1750, Microsoft IE 8.0.7600).

Table 2. SunSpider benchmark results chart
3 Improving your Web application
There are many ways to improve the performance of a new Web application. The
main areas you can work on are the hardware (the Web server itself), server-side
scripting and front-end performance. The last one is the easiest to focus on and
provides instant results for your work. [5], [9]
3.1 Why focus on front-end performance?
The front-end is the most accessible part of a Website. Root access to your server
requires specialized knowledge and might not even be possible. Another advantage
of improving front-end performance is cost: the only thing required is your time,
and since it translates directly into application response time, it is time well spent.
With that in mind, let's get to some specific Web application improvements.
3.1.1 Profiling your Webpage to sort out unneeded components
Fig. 1. Firebug extension for Firefox
It's always helpful to profile your Web page to find components that you don't
need or components that can be optimized. Profiling a Web page usually involves
using a tool like Firebug to determine which components (images, CSS files,
HTML documents and JavaScript files) are being requested by the user, how long
each component takes to load, and how big it is. A general rule of thumb is to keep
your page components as small as possible; 25 KB is a good reference point, since
it is the cache limit for objects on the iPhone.
Firebug's Net tab can help you hunt down huge files that bog down your Website.
It gives you a breakdown of all the components required to render a Web page,
including what each one is, where it is, how big it is, and how long it took to load.
3.1.2 Use images in the right format to reduce their file size.
Fig. 2. Proper way to save images
If you have a lot of images, it‟s essential to learn about the optimal format for each
image. There are three common Web image file formats: JPEG, GIF, and PNG. In
general, you should use JPEG for realistic photos with smooth gradients and color
tones. You should use GIF or PNG for images that have solid colors (such as charts
and logos). GIF and PNG are similar, but PNG typically produces a lower file size.
3.1.3 Minify your CSS and JavaScript documents
Minification is the process of removing unneeded characters, such as tabs, spaces
and source-code comments, from the source code to reduce its file size. For
example, this piece of CSS:
.some-class {
color: #ffffff;
line-height: 20px;
font-size: 9px;
}
can be converted to:
.some-class{color:#fff;line-height:20px;font-size:9px;}
You don't have to do this reformatting manually. There is a plethora of online
tools that can help you minify your CSS and JavaScript files. For JavaScript, some
popular minification options are JSMin, the YUI Compressor, and JavaScript Code
Improver. A good minifying application gives you the ability to reverse the
minification for when you're in development. Alternatively, you can use an
in-browser tool like Firebug to see the formatted version of your code.
3.1.4 Combining CSS and JavaScript to reduce HTTP requests
For every component needed to render a Web page, an HTTP Request is created to
the server. So, if you have five CSS files for a Web page, you would need at least five
separate HTTP GET requests for that particular Web page. By combining files, you
reduce the HTTP request overhead required to generate a Web page.
3.1.5 Use CSS sprites to reduce HTTP requests
Fig. 3. CSS Sprite from Amazon
A CSS Sprite is a combination of smaller images into one big image. To display
the correct image, you adjust the background-position CSS attribute.
Combining multiple images in this way reduces HTTP requests.
You can do this manually, but there is a Web-based tool called CSS Sprite
Generator that lets you upload images to be combined into one CSS sprite, and
then outputs the CSS code (the background-position attributes) needed to
render the images.
3.1.6 Offload site assets and features
Offloading some of your site assets and features to third-party Web services
greatly reduces the work of your Web server. The principle is that you share the
burden of serving page components with another server.
You can use Feedburner to handle your RSS feeds, Flickr to serve your images,
and the Google AJAX Libraries API to serve popular JavaScript frameworks/libraries
like MooTools, jQuery and Dojo. Not only are these solutions cost-effective, but they
drastically reduce the response times of Web pages.
One thing to remember here, if you want to exploit the commercial aspect of your
Website, is to read very carefully the license agreement you accept when you use
these services. Be careful what rights you are giving away: many Websites include
a clause in the license agreement, sometimes very well hidden, that gives them the
right to use content that is rightfully yours at their own free will and for their own
benefit.
3.1.7 HTTP Compression
HTTP compression is used to compress content sent from the Web server. Both
HTTP requests and responses can be compressed, which can result in great
performance gains. Through HTTP compression, the size of the payload can
typically be reduced by about 50%.
HTTP compression is now widely supported by browsers and Web servers. If
HTTP compression is enabled on the Web server and the request includes an
Accept-Encoding: gzip, deflate header, the browser supports the gzip
and deflate compression mechanisms, so the server can compress the response in
either of those formats to reduce the payload size. This leads to an increase in
performance. The compressed response is later decompressed by the browser and
rendered normally.
3.1.8 CSS at Top and JavaScript at Bottom
The recommended approach is to put CSS links at the top of the Web page, as this
makes the page render progressively. Since users want to see the contents of a
page while it is loading rather than blank space, content and formatting should
come first. The HTML specification clearly says to declare style sheets in the head
section of a Web page.
When scripts are defined at the top of the page they can take unnecessary time to
load, and they delay the contents that users expect to see after making a request to
the Web server. It is better to display the HTML contents of a page first, then load
any scripting code (when possible, of course).
Preferably, link JavaScript files at the bottom of the Web page. Alternatively you
can use the defer attribute, which runs the script after the page has loaded, but
that is not the preferable approach because it is not browser independent. For
example, Firefox does not support it and it can interfere with document.write,
so only use it once you fully understand the implications.
3.1.9 Reduce Cookie size
Cookies are stored on the client side to keep information about users
(authentication and personalization). Since HTTP is a stateless protocol, cookies
are commonly used in Web development to maintain information and state.
Cookies are sent with every HTTP request, so keep them small to minimize their
effect on the HTTP response:
1. A cookie's size should be minimized as much as possible.
2. Cookies should not contain secret information. If really needed, that
   information should be encrypted or encoded.
3. Minimize the number of cookies by removing unnecessary ones.
4. Cookies should expire as soon as they become useless for the application.
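Since cookies travel with every request, their byte cost is easy to reason about. The helper below (an invented name, shown for illustration only, using Node.js's Buffer to count bytes) estimates the size of the Cookie header a browser would send for a given set of cookies:

```javascript
// Estimate the bytes a set of cookies adds to every HTTP request.
function cookieHeaderSize(cookies) {
  var pairs = [];
  for (var name in cookies) {
    pairs.push(name + '=' + encodeURIComponent(cookies[name]));
  }
  // Browsers send one "Cookie: " header with "; "-separated pairs.
  return Buffer.byteLength('Cookie: ' + pairs.join('; '));
}

console.log(cookieHeaderSize({ sid: 'abc123', theme: 'dark' })); // 30 bytes per request
```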
3.1.10 Use Cache appropriately
Caching is a great way to save server round trips, and database server round trips
as well, since both are expensive processes. By caching data we can avoid these
trips when they are unnecessary. The following are a few guidelines for
implementing caching:
1. Static content should be cached, like the "Contact us" and "About us" pages
   and other pages that contain static information.
2. If a page is not fully static, it contains some dynamic information. Such pages
   can leverage the ASP.NET technology, which supports partial page caching.
3. If data is dynamically accessed and used in Web pages, such as data read from
   a file or database, and even if the data changes regularly, it can be cached by
   using the ASP.NET 2.0 cache dependency features. As soon as the data is
   changed in the back-end by some other means, the cache is updated.
Now that Web technologies such as ASP.NET have matured and offer such great
caching capabilities, there is really no reason not to make extensive use of them.
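The same caching idea applies on the client as well, for example to AJAX responses. The sketch below is a minimal in-memory cache with time-based expiry; the Cache name and the expireAfterMs parameter are invented for illustration:

```javascript
// Minimal in-memory cache with time-based expiry.
function Cache(expireAfterMs) {
  this.expireAfterMs = expireAfterMs;
  this.entries = {};
}

// Returns the cached value, or null if it is missing or stale.
Cache.prototype.get = function (key, now) {
  var entry = this.entries[key];
  if (!entry || now - entry.storedAt > this.expireAfterMs) {
    return null;
  }
  return entry.value;
};

Cache.prototype.set = function (key, value, now) {
  this.entries[key] = { value: value, storedAt: now };
};

var cache = new Cache(1000); // entries live for one second
cache.set('/about-us', '<p>static content</p>', 0);
console.log(cache.get('/about-us', 500));  // fresh: the cached markup
console.log(cache.get('/about-us', 2000)); // stale: null
```

Passing the clock in as now keeps the sketch deterministic; real code would use new Date().getTime().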
3.2 Improving JavaScript programs
The most common ways of using JavaScript require no optimization at all.
However, when you start creating complex applications in JavaScript, you will hit
some walls rather quickly. Fortunately, the code you are currently writing can be
accelerated substantially. [6], [7]
3.2.1 Analyzing performance
Before you attempt any modifications and tweaks, make sure to profile with
Firebug's console.profile() and console.profileEnd(). Test results will vary
substantially across subsequent runs, but they serve their purpose of finding the
bottlenecks.
3.2.2 Remove Double $$ and event binding
There are many small differences in performance, but several things are likely to
really kill it. One of the most important ones is using Prototype's double dollar
($$) function or the similar Element.select. You can often avoid the double dollar
function; take, for example, the use case of attaching events to all 'report this'
buttons on your site. The simple approach would be the following code:

$$('.report_this').each(function(report_button) {
  var id = report_button.id.split('_')[1];
  report_button.observe('click', this.respondToReportButton.bind(this, id));
});

Four things slow this code down: the use of the $$ function, the use of each
instead of a native looping construct, the retrieval of the id from the id string, and
the repeated binding of functions.
There are several possible remedies for the above code:
1. Give all report_this buttons a unique id (say, when you have 15 or fewer in a
   list).
2. Pre-generate a list of ids using your server-side language of choice and pass it
   to JavaScript.
3. Manually traverse the DOM; $('container').childNodes can do wonders.
4. Bind once to a common parent element.
5. Find items by name instead of class.
6. Forget about all the initializing and fall back to old-school
   onclick="classinstance.respondToReportButton()".
This last option goes against many Web development principles, but it is often a
very pragmatic choice.
A better implementation using the first technique would be:

this.respondToReportButtonBound = this.respondToReportButton.bind(this);
for (var x = 1; x < 16; x++) {
  var button = $('report_button' + x);
  if (!button) break;
  button.observe('click', this.respondToReportButtonBound);
}
3.2.3 Stalling on writing unneeded code
The trick here is to put in a bit of effort to make your code lazy: don't do anything
until it is needed, with the one important exception of cases where doing so would
hurt the user experience. If some items are currently not visible to the user, simply
don't bind events to them. If you need to extract IDs, don't do so until someone
actually clicks on the item in question. Furthermore, make the code lazy in the
regular sense of the word as well: if it only needs to change one item, figure out
which one it is and don't loop over all of them changing each just in case. This
point is different for every application, but it can achieve great speed gains for a
creative programmer.
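The laziness described above can be captured in a small helper: wrap the expensive work in a function that only runs on first use. The lazy helper below is an invented name, shown as a sketch:

```javascript
// Defer expensive work until the value is actually requested,
// and compute it at most once.
var computeCount = 0;

function lazy(compute) {
  var cached;
  var done = false;
  return function () {
    if (!done) {
      cached = compute();
      done = true;
    }
    return cached;
  };
}

var getIds = lazy(function () {
  computeCount++; // stands in for expensive DOM traversal / id extraction
  return [1, 2, 3];
});

console.log(computeCount); // 0 -- nothing computed yet
getIds();
getIds();
console.log(computeCount); // 1 -- computed exactly once, on first use
```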
3.2.4 Stop using Prototype functions when they are not needed
Often you do not really need some of the functionality of Prototype, in the sense
that it barely saves you development time. When comparing the speed of
element.innerHTML = 'hello world' versus element.update('hello world'), the
differences are substantial (60 times with large chunks of HTML). The each
iterator is also often not needed and can be replaced by a simple for loop with
checks on nodeType and tagName. The same goes for the templating system.
These tools barely save you time, but really hurt performance, so when you really
need speed, be sure to refrain from using them.
3.2.5 Lower level optimizations
When you are done implementing the really important optimizations, there are
quite a few lower-level optimizations that will speed up your code:
1. Write to innerHTML instead of using document.createElement.
2. Use for loops instead of for...in loops.
3. Cache variables and functions.
4. Limit the usage of eval.
5. Limit the usage of try/catch statements.
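"Cache variables and functions" from the list above can be as simple as hoisting repeated lookups out of a loop, as in this sketch:

```javascript
// Hoist repeated property and function lookups out of the loop:
// Math.floor and data.length are each resolved once instead of n times.
var data = [1.2, 3.7, 5.5, 9.9];

var floor = Math.floor; // cached function reference
var len = data.length;  // cached length
var out = [];
for (var i = 0; i < len; i++) {
  out.push(floor(data[i]));
}
console.log(out); // [1, 3, 5, 9]
```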
3.2.6 Cache your objects
One of the best kept secrets to boosting script performance is to cache your objects.
Often times, your script will repeatedly access a certain object, as in the following
demonstration:
<script type="text/javascript">
for (var i = 0; i < document.images.length; i++)
  document.images[i].src = "blank.gif";
</script>
In the above, the object document.images is accessed multiple times. The code is
inefficient, since the browser must dynamically look up document.images twice
during each iteration: once to evaluate i < document.images.length, and once to
access and change the image's src. If you have 10 images on the page, for
example, that's 20 calls to the images collection right there. Excessive calls to
JavaScript objects can wear down the browser, not to mention your computer's
memory.
The term "cache your object" means storing a repeatedly accessed object inside a
user-defined variable, and using that variable in subsequent references to the
object. The performance improvement can be significant. Here is a modified
version of the initial script using object caching:
<script type="text/javascript">
var theimages = document.images;
for (var i = 0; i < theimages.length; i++)
  theimages[i].src = "blank.gif";
</script>
Not only is the number of times document.images is referenced cut in half, but for
each reference the browser doesn't have to resolve document.images first; it goes
straight to the cached collection. Remember to use object caching when calling
deeply nested DHTML objects, like document.all.myobject or
document.layers.firstlayer.
3.2.7 Cache your scripts
Once you have cached your objects, another way to enhance script performance is
to cache the entire script by including it in a .js file. This technique causes the
browser to load the script in question only once, and recall it from cache should
the page be reloaded or revisited.

<script type="text/javascript" src="myscript.js"></script>

You should use script caching when a script is extremely large or embedded across
multiple pages.
3.3 Increase JQuery performance
3.3.1 Always use the latest version
Being an open-source JavaScript library, jQuery is under constant development
and improvement, with new versions available every few weeks. The creator and
his team are always researching new ways to improve performance, so it is
imperative that you always use the latest version. You can do this by using
Google's AJAX Libraries API:

<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
/* load the minified version of jQuery v1.3.2 */
google.load("jquery", "1.3.2", {uncompressed: false});
</script>
This hard-codes the specific version of jQuery you want to use; if instead you want
to automatically reference the most recent version of the library, which is what we
want, you can use 1 in place of the version number:

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.js"></script>
3.3.2 Combine and minify your scripts
Most browsers cannot process more than one script concurrently, so they queue
them up, and load times increase. Since most Websites use the same scripts on
every page, you can put them together in a single file and use a compression tool to
minify them (many tools for compressing JavaScript and CSS files are available).
One file instead of many, and one small file instead of one large one, leads to
faster load times for your Website. Through minification you preserve the
operational quality of the code while reducing its overall size in bytes.
3.3.3 Use For instead of Each
The use of native constructs is almost always faster than a helper function:

var array = new Array();
for (var i = 0; i < 10000; i++) {
  array[i] = 0;
}

console.time('native');
var length = array.length;
for (var i = 0; i < length; i++) {
  array[i] = i;
}
console.timeEnd('native');

console.time('helper');
$.each(array, function (i) {
  array[i] = i;
});
console.timeEnd('helper');
The native loop takes 3 ms and the helper (jQuery) function 29 ms, so the built-in
each function from jQuery takes almost ten times as long as the native JavaScript
for loop. If you are setting CSS attributes or manipulating DOM elements in a
loop, it is wise to use the faster way.
3.3.4 Use ID’s instead of Classes
jQuery uses the browser's native method getElementById() to find an object,
achieving a very fast query. So unless it is absolutely necessary to use complex
selectors (which jQuery does not fail to provide, by the way), you should select by
id or specify a container for the element you want to select. The following code
creates a list, fills it with items and then selects each item once:
console.time('class');
var list = $('#list');
var items = '<ul>';
for (var i = 0; i < 1000; i++) {
  items += '<li class="item' + i + '">item</li>';
}
items += '</ul>';
list.html(items);
for (var i = 0; i < 1000; i++) {
  var s = $('.item' + i);
}
console.timeEnd('class');

console.time('id');
var list = $('#list');
var items = '<ul>';
for (var i = 0; i < 1000; i++) {
  items += '<li id="item' + i + '">item</li>';
}
items += '</ul>';
list.html(items);
for (var i = 0; i < 1000; i++) {
  var s = $('#item' + i);
}
console.timeEnd('id');
Running the above code shows roughly a five-second difference between the two
implementations of the element selection.
3.3.5 Use a context for your selectors
jQuery selectors accept a DOM node context, which should be used in conjunction
with the selector to narrow the query and prevent traversing the whole DOM, as
specified in the jQuery API reference.

$('.class').css('color', '#111111');
$('.class', '#class-container').css('color', '#111111');

The second selector is in the form $(expression, context).
3.3.6 Always use caching
Never use a selector for the same element more than once, especially in a loop;
that is a big programming fault. Use the selector once and cache the returned
object in a variable, so that the DOM does not spend time tracking down the
elements you need.

$('#item').css('color', '#111111');
$('#item').html('hi');
$('#item').css('background-color', '#ffffff');

// you could use this instead
$('#item').css('color', '#111111').html('hi').css('background-color', '#ffffff');

// and even better
var item = $('#item');
item.css('color', '#111111');
item.html('hi');
item.css('background-color', '#ffffff');

// as for loops, this is a big mistake
console.time('no cache');
for (var i = 0; i < 1000; i++) {
  $('#list').append(i);
}
console.timeEnd('no cache');

// look at this
console.time('cache');
var item = $('#list');
for (var i = 0; i < 1000; i++) {
  item.append(i);
}
console.timeEnd('cache');
When you have a big loop on your hands with a lot of elements to modify, using
many selectors inside the loop would be a performance killer.
3.3.7 Don’t use DOM manipulation
Using the DOM functions to insert HTML into a page is rather time-consuming.
Instead of using prepend(), append() or after(), you can use jQuery's .html()
function, which is much faster.
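The pattern behind .html() is to build all the markup as a string first and hand it to the DOM in a single call. The string-building half of the idea looks like this (the final .html() call is shown as a comment, since it needs a browser):

```javascript
// Build the whole fragment in JavaScript, then insert it once.
var items = [];
for (var i = 0; i < 3; i++) {
  items.push('<li>item ' + i + '</li>');
}
var markup = '<ul>' + items.join('') + '</ul>';

// In the browser: $('#list').html(markup); -- one DOM operation
// instead of three separate append() calls.
console.log(markup);
```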
3.3.8 Don't use functions for string concatenation
Functions like concat() and join() are slower than the += operator and shouldn't
be used when you want to join together large pieces of text. A study of these
functions was made by Tom Trenka, who stated the following:
"The += operator is faster, even more than pushing string fragments into an array
and joining them at the last minute" and "An array as a string buffer is more
efficient on all browsers, with the exception of Firefox 2.0.0.14/Windows, than
using String.prototype.concat.apply." - Tom Trenka
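The strategies Trenka compares can be put side by side; both build identical output, and on the engines of the time += was usually the faster one:

```javascript
// Two ways to build the same string: += versus an array used as a buffer.
var n = 5;

var viaPlus = '';
for (var i = 0; i < n; i++) {
  viaPlus += '<li>' + i + '</li>';
}

var buffer = [];
for (var i = 0; i < n; i++) {
  buffer.push('<li>' + i + '</li>');
}
var viaJoin = buffer.join('');

console.log(viaPlus === viaJoin); // true -- identical output, different cost
```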
3.3.9 Write your functions with return false at the end
When your event handler functions do not end with return false;, the browser
follows the default action and jumps to the top of the page, which can be quite
annoying:
$('#item').click (function () {
// your code
return false;
});
3.3.10 Always have the API reference and most common functions at your
disposal
Keep links to the API reference and to the most common functions open, so you
can quickly help yourself when writing code using the jQuery library. [4]
3.4 Tools that help you write faster applications
Response times, availability and stability are vital factors to bear in mind when
creating and maintaining a Web application. If you‟re concerned about your Web
pages‟ speed or want to make sure you‟re in tip-top shape before starting or launching
a project, you can use a series of tools to help you create and sustain
high-performance Web applications.[8]
3.4.1 Firebug
Firebug is an essential browser-based Web development tool for debugging,
testing and analyzing Web pages. It has a powerful set of utilities to help you
understand and dissect what‟s going on. One of the many notable features is the Net
(“network”) tab where you can inspect HTML, CSS, XHR, JS components.
Fig. 4. Firebug and its options
3.4.2 YSlow for Firebug
Fig. 5. YSlow for Firebug
YSlow grades a Website's performance based on the best practices for
high-performance Web sites from the Yahoo! Developer Network. Each rule is
given a letter grade (A through F) stating how you rank on certain aspects of
front-end performance. It's a simple tool for finding things to work on, such as
reducing the number of HTTP requests a Web page makes, and compressing
external JavaScript and CSS files.
YSlow works in three phases to generate its results.
1. YSlow crawls the DOM to find all the components (images, scripts, stylesheets,
etc.) in the page. After crawling the DOM, YSlow loops through Firebug's Net
Panel components and adds those to the list of components already found in the
DOM.
2. YSlow gets information about each component: size, whether it was gzipped,
Expires header, etc. YSlow gets this information from Firebug's Net Panel if it's
available. If the component's information is not available from Net Panel (for
example, the component was read from cache or it had a 304 response) YSlow
makes an XMLHttpRequest to fetch the component and track its headers and
other necessary information.
3. YSlow takes all this data about the page and generates a grade for each rule, which
produces the overall grade.
3.4.3 Fiddler 2
Fiddler 2 is a browser-based HTTP debugging tool that helps you analyze
incoming and outgoing traffic. It is highly customizable and has countless
reporting and debugging features. Functional uses of Fiddler include improving
"first-visit" performance (i.e. with an unprimed cache), analyzing HTTP response
headers and creating custom flags for potential performance problems.
Fig. 6. Fiddler 2
3.4.4 Cuzillion
Cuzillion is a tool that helps you see how page components interact with each
other. The goal is to let you quickly check, test and modify Web pages before you
finalize their structure. It can give you clues about potential trouble spots or points
of improvement. Cuzillion was created by Steve Souders, the former Chief
Performance Yahoo!, a leading engineer in the development of Yahoo!'s
performance best practices and the creator of YSlow.
Fig. 7. Cuzillion
4 References
1. Kirk Pepperdine, JavaScript Performance, August 2007,
   http://www.fasterj.com/articles/javascript.shtml
2. The Great Browser JavaScript Showdown, 19 December 2007,
   http://www.codinghorror.com/blog/archives/001023.html
3. SunSpider JavaScript Benchmark,
   http://www2.webkit.org/perf/sunspider-0.9/sunspider.html
4. 10 Ways to Instantly Increase Your jQuery Performance,
   http://net.tutsplus.com/tutorials/javascript-ajax/10-ways-to-instantly-increase-your-jquery-performance/
5. Improve Web application performance,
   http://dotnetslackers.com/articles/aspnet/ImproveWebApplicationPerformance.aspx
6. Performance tips for JavaScript,
   http://www.javascriptkit.com/javatutors/efficientjs2.shtml
7. JavaScript optimization,
   http://www.mellowmorning.com/2008/05/18/javascript-optimization-high-performance-js-apps/
8. Tools to help you design better Web pages,
   http://sixrevisions.com/tools/faster_web_page/
9. Ways to improve your Webpage performance,
   http://sixrevisions.com/web-development/10-ways-to-improve-your-web-page-performance/