A collection of must-read articles highlighting the importance of web performance for your online business
This edition:
Velocity Europe is less than two weeks away. HTTP Archive Trends: Amazon decreased total payload by almost 15%, but it was the only site that showed improvement. To Load Test or Not to Load Test. How server-side performance affects mobile user experience. Possible performance problems with Google+ extensions.
www.measureworks.nl
www.measurematters.com
MeasureMatters #9
1. MEASUREMATTERS
#9
A collection of must-read
articles highlighting the
importance of
web performance for your
online business
2. Web Performance Links
• Velocity Europe – High Performance Berlin!
• HTTP Archive: new code, new charts
• Trendwatching on the HTTP Archive
• To Load Test or Not to Load Test: That is not
the question
• How server-side performance affects mobile
user experience
• Google Chrome Plus Extensions: Plus or
Minus?
3. Velocity Europe –
High Performance
Berlin!
Velocity Europe is less than two weeks away. It’s happening November 8-9 in Berlin at
the Hotel Maritim ProArte. I’ve heard good things about the venue and am excited to
get there and check it out.
This event has been a long time coming. A handful of web performance and
operations savants (including members of the Program Committee) have been
encouraging us for years to bring Velocity to Europe, and now it’s actually happening.
And (drum roll please) the price is only EUR 600 (excl. VAT) if you use the 20% discount
code veu11sts. (And don’t forget about the free Velocity Online Conference this week
– see more below.)
The Velocity Europe speaker line-up is exceptional. Some highlights include:
Jon Jenkins from Amazon.com is talking about their approach to the challenges of
mobile browsing. Jon is the Director of Software Development for Amazon Silk. I’m
looking forward to more details about Silk’s split architecture.
Tim Morrow delivers the background for Betfair’s promise to deliver a fast experience
to their customers, and their progress on that promise.
Theo Schlossnagle is a recognized leader at Velocity. He’s giving two talks on web
operations careers and monitoring.
Estelle Weyl joins Velocity for the first time, talking about the nuances of mobile
rendering performance. I learn something new every time I hear Estelle speak, so I am
excited to welcome her to Velocity.
Ivo Teel discusses the balance we all face between features and performance and
how they’re handling that at Spil Games.
Jeff Veen knows the importance of 3rd party performance and availability as the CEO
of Typekit. Jeff’s an amazing, engaging speaker. Reading his session description gave
me goosebumps with anticipation: Jeff sat on a couch in the Typekit offices, staring
out the window, and wondering if everything their company had been working
towards was about to slip through their fingers…
There’s much, much more – lightning demos, browser vendor talks, Allspaw on
anticipating failure, Mandelin on JavaScript performance – I’ve got to stop here but
please check out the entire schedule.
I want to give a shout out to the Velocity Europe Program Committee: Patrick
Debois, Aaron Peters, Schlomo Schapiro, Jeroen Tjepkema, and Sean Treadway.
They’ve participated in numerous video concalls (yay Google Hangouts!) to review
proposals, build the program, and shape Velocity to be a European conference. And
they might have one more card up their sleeve – more on that later.
You can get a free warm-up for Velocity Europe at the Velocity Online
Conference this week. It’s Wednesday, October 26, 9-11:30am PDT. John Allspaw,
Velocity co-chair, has rounded up four speakers to cover several hot topics including
monitoring, global DNS, and making yourself even more awesome(!). It’s free, but you
have to register for Velocity OLC if you want to get in on the conversation.
If you’re heading to Berlin you should also check out CouchConf Berlin on Nov 7.
NoSQL has great performance benefits and Couchbase is a good choice for many
mobile apps. Use couchconf_discount for 10% off registration.
The last time I was in Berlin was for JSConf.eu 2009. The city had a high tech vibe and
the crowd was extremely knowledgeable and enthusiastic. I’m excited to get back to
Berlin for Velocity Europe and do the web performance and operations deep dives
that are the core of Velocity. If you want to have a website that’s always fast and
always up, Velocity Europe is the place to be. I hope to see you there.
Source: Steve Souders (blog), Posted: October 24, 2011
http://www.stevesouders.com/blog/2011/10/24/velocity-europe-high-
performance-berlin/
4. HTTP Archive: new
code, new charts
The HTTP Archive is a permanent record of web performance
information started in October 2010. The world’s top 17,000 web pages
are analyzed twice each month to collect information such as
the number and size of HTTP requests, whether responses
are cacheable, the percent of pages with errors, and the average Page
Speed score. The code is open source and all the data
is downloadable.
The next big step is to increase the number of URLs to 1 million. The
biggest task to get to this point is improving the database schema and
caching. This past week I made some significant code contributions
around caching aggregate stats across all the web sites. Even with only
17K URLs the speed improvement for generating charts is noticeable.
The new stats cache allows me to aggregate more data than
before, so I was able to add several trending charts. (The
increases/decreases are Nov 15 2010 to Oct 15 2011.)
percent of sites using Google Libraries API – up 6%
percent of sites using Flash – down 2%
percent of responses with caching headers – up 4%
percent of requests made using HTTPS – up 1%
percent of pages with one or more errors – down 2%
percent of pages with one or more redirects – up 7%
Most of the news is good from a performance perspective, except for
the increase in redirects. Here’s the caching headers chart as an
example:
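Since the HTTP Archive stores its crawls as HAR data, a figure like “percent of responses with caching headers” can be approximated from any HAR capture. A minimal sketch in Python (a hypothetical helper, not the Archive’s actual code):

```python
import json  # only needed for the usage example below

def percent_with_caching_headers(har):
    """Rough cut: percent of responses in a parsed HAR capture that
    carry a Cache-Control or Expires header."""
    entries = har["log"]["entries"]
    cached = 0
    for entry in entries:
        names = {h["name"].lower() for h in entry["response"]["headers"]}
        if names & {"cache-control", "expires"}:
            cached += 1
    return 100.0 * cached / len(entries) if entries else 0.0

# Usage: percent_with_caching_headers(json.load(open("crawl.har")))
```

Real crawls need more care (conditional requests, `no-store`, weighting per page), but the shape of the calculation is the same.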
Source: Steve Souders (blog), Posted: October 21, 2011
http://www.stevesouders.com/blog/2011/10/20/http-archive-new-code-new-
charts/
5. HTTP Archive: new
code, new charts
I dropped the following charts:
popular JavaScript libraries – I created this chart using handcrafted
regular expressions that attempted to find requests for popular
frameworks such as jQuery and YUI. Those regexes are not always
accurate and are hard to maintain. I recommend people use
the JavaScript Usage Statistics from BuiltWith for this information.
popular web servers – Again, BuiltWith’s Web Server Usage Statistics is a
better reference for this information.
sites with the most (JavaScript | CSS | Images | Flash) – These charts
were interesting, but not that useful.
popular scripts – This was a list of the top 5 most referenced scripts
based on a specific URL. The problem is that the same script can have
a URL that varies based on hostnames, querystring parameters, etc.
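To see why URL-based library detection is fragile, consider a hypothetical pattern of the kind described (illustrative only, not one of the Archive’s actual regexes): it catches the common jQuery URL shapes but misses a concatenated bundle entirely.

```python
import re

# Hypothetical detector: spot jQuery purely from the request URL.
JQUERY_RE = re.compile(r"/jquery[-.]?([\d.]+)?(\.min)?\.js", re.IGNORECASE)

urls = [
    "https://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js",  # caught
    "https://example.com/js/jquery-1.6.4.js",                            # caught
    "https://example.com/assets/app.bundle.js?lib=jquery",               # missed bundle
]
hits = [u for u in urls if JQUERY_RE.search(u)]  # finds only the first two
```

Every rename, bundler, or CDN path variant needs another pattern, which is exactly the maintenance burden described above.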
The new stats cache is a great step forward. I have a few more big
coding sessions to finish but I hope to get enough done that we can
start increasing the number of URLs in the next run or two. I’ll keep you
posted.
6. Trendwatching on
the HTTP Archive
Steve put up a great post about the HTTP Archive last month that I’ve
been meaning to pile onto. As one of the archive’s financial
supporters, Strangeloop is obviously a big fan and I’m always talking it
up with our customers. (I was on the phone last week, pimping our
mobile product with one of my favourite analysts — another data geek
— who didn’t know it existed. She was very interested when I pointed
out the incredibly exciting database we are creating.)
A few trends jumped out at me when I compared the first run in
November 2010 to the latest run September 15, 2011.
As Steve pointed out in his post, payload is going up… and up… and
up.
When I dug into this, I focused my attention on the top 100 sites
because these guys represent my customers and I am very familiar with
them. I wasn’t surprised to see total payload going up by 26% in just
under a year — a pretty amazing number when you think about it.
Images grew by close to 30% and scripts by close to 26%. It is tough to
make pages fast when they grow this quickly.
When I see payload going up, my first instinct is to blame the
unconverted — the big guys who just don’t get it yet. To test my
assumption, I took a look at the players who do really get it. I was
surprised by my findings:
Source: Joshua Bixby, Web Performance Today, Posted: September 30, 2011
http://www.webperformancetoday.com/2011/09/30/trendwatching-the-http-
archive/
7. Trendwatching on
the HTTP Archive
Overall, I was really surprised to see the big guys not practicing what
they preach.
Social content is growing, and Google+ is neck-and-neck with
Facebook.
I was also surprised at the growth in social on the top 100 sites. I was
most surprised by the growth in Google+ and the fact that it is equal to
Facebook. See below:
[Charts: popularity of JavaScript libraries, 2010 vs. 2011]
Twitter has pulled ahead, from 2% to 8%. Facebook has grown from 2%
to 5%. And right out of the blocks, Google+ has surged to a tie with
Facebook. Some people say Google+ is a flash in the pan, others say
it’s a serious contender. I’ll be very interested in seeing where these
numbers are at next year.
8. Trendwatching on
the HTTP Archive
On the one hand, Amazon decreased total payload by almost 15%. But
they were the only site that showed improvement. Every other major
player I checked increased their total payload: Google by 34.5%, Gmail
by 25%, Yahoo by 18%, and Microsoft by 30%.
Not surprisingly, the number of requests increased across the board as
well:
9. Trendwatching on
the HTTP Archive
1 out of 4 of the top 100 sites still don’t use cache headers. This is a core best practice, but about 1 out of 4 of the top 100 sites still don’t use it. This is a humbling reminder that, despite the great strides front-end optimization has made in the past couple of years, we can’t assume everyone is on the same page.
Correlations to render time and load time have inverted.
Both of these sets of graphs intrigued me. It’s interesting to see the
decrease across the board in all of the items as a contributor to render
time. At the same time, we see an increase in correlation to load time.
The fact that these two graphs seem inverted makes me wonder if
there’s a connection between them.
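The correlations in those charts are Pearson coefficients between a page metric and render or load time. For readers who want to reproduce this kind of analysis, a self-contained sketch with made-up numbers (not actual Archive data):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative only: per-page request counts vs. load times (ms).
requests_per_page = [40, 55, 70, 90, 120]
load_times = [1800, 2400, 2900, 3600, 5100]
r = pearson(requests_per_page, load_times)  # strongly positive for this sample
```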
10. Trendwatching on
the HTTP Archive
I asked Hooman Beheshti, our VP of Product, about this, and here are
his thoughts:
Round trips correlate to load time a lot more this year, and are in front.
With all the 3rd party and social networking tags, this matches what
we see with our customers. Round trips continue to be a massive
contributor to load time, maybe now more than ever.
Transfer size may be second, and may fool us into thinking we’re
getting things from point A to point B faster, but its impact on total
load time has gone up. So it may not have as big an impact as
round trips, but it matters more now than it did before.
The fact that domains used on a page is a new big-boy contributor to
load time (and leads the charge now in render time) may point to the
fact that, collectively, we may not be doing as well as we thought with
modern browsers and parallelism. And by that, I don’t mean
concurrent connections to the same domain – just concurrent
connections, period. Either that, or the domains-per-page is increasing
(by 30% according to this, and by 20%+ for the top 100) and so is its
impact on performance. Third-party tags could also be a contributor
to this.
That’s all I can think of. I don’t have general theories on why the
numbers are bigger for one and smaller for the other. It’s
interesting, though, that the trend for render and load times
themselves is not a part of the comparison and analysis. It would be
interesting to see if these metrics are going up or down on average.
I had a blast digging into the HTTP Archive, and I strongly encourage
you to do the same, if you haven’t already. And if you have any
theories about my findings, or findings of your own, I’d love to hear
them.
11. To Load Test or Not
to Load Test: That is
not the question
There is no doubt that performance is important for your business. If you don’t agree
you should check out what we and others think about the Performance Impact on
Business or remember headlines like these:
Target.com’s web site was down after promoting new labels: Article on MSN
Twitter was down and people were complaining about it on Facebook: Huffington Post
Article
People were stranded at airports because United had a software issue: NY Times Article
The question therefore is not whether performance is important. The question is
how to ensure and verify that your application performance is good enough.
Use your End-Users as Test Dummies?
In times of tight project schedules and very frequent releases, some companies tend to
release new software versions without going through a proper test cycle. Only a few
companies can actually afford this, because they retain their users’ loyalty regardless of
functional or performance regressions (again – only a few companies have that
luxury). If the rest of us were to release projects without proper load testing, we would
end up as another headline in the news.
Releasing a new version without proper load testing is therefore not the correct
answer.
Don’t let them tell you that Load Testing is hard
When you ask people why they are not performing any load tests, you usually hear
things along the following lines:
We don’t know how to test realistic user load as we don’t know the use cases nor the
expected load in production
We don’t have the tools, expertise or hardware resources to run large scale load tests
It is too much effort to create and especially maintain testing scripts
Commercial tools are expensive and sit too long on the shelf between test cycles
We don’t get actionable results for our developers
If you are the business owner or a member of a performance team, you should not
accept answers like these. Let me share my opinion so that you can counter some of
these arguments in your quest to achieve better application performance.
Answer to: What is Realistic User Load and Use Cases
Indeed, it is not easy to know what realistic user load and use cases are if you are
about to launch a new website or service. In this case you need to make sure to do
enough research on how your new service will be used once launched. Factor in how
much money you spend in promotions and what conversion rate you expect. This will
allow you to estimate peak loads.
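That back-of-envelope estimate can be written down explicitly. Every input below is a hypothetical marketing figure you would replace with your own numbers:

```python
def estimate_peak_rps(impressions, click_through_rate,
                      campaign_hours, peak_factor, pages_per_visit):
    """Back-of-envelope peak page requests per second for a promotion."""
    visits = impressions * click_through_rate
    avg_visits_per_sec = visits / (campaign_hours * 3600.0)
    return avg_visits_per_sec * peak_factor * pages_per_visit

# Example campaign: 2M ad impressions, 1.5% click-through, spread over
# 24 hours, with traffic peaking at 5x the average and 4 pages per visit.
peak = estimate_peak_rps(2_000_000, 0.015, 24, 5, 4)  # roughly 7 requests/s
```

The point is not precision but having a defensible target before the test, so you know whether 100 or 10,000 virtual users is the right scale.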
Learn from your Real Users
It’s going to be easier when you launch an update to an existing site. I am sure you
use something like Google Analytics, Omniture, or dynaTrace UEM to monitor your end
users. If so, you have a good understanding of current transaction volume. Factor in
the new features and how many new users you want to attract. Also factor in any
promotions you are about to run. Talk with your Marketing folks – they are going to
spend a lot of money, and you don’t want your system to go down and all that money
wasted. Also analyze your Web server logs, as they can give you even more valuable
information regarding request volume. Combining all this data allows you to answer
the following questions:
What are my main landing pages I need to test? What’s the peak load and what is
the current and expected Page Load Time?
What are the typical click paths through the application? Do we have common click
scenarios that we can model into a user type?
Where are my users located on the world map and what browsers do they use? What
are the main browser/location combinations we need to test?
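As one concrete illustration of mining server logs for request volume, here is a rough sketch that assumes the common Apache/Nginx combined log format (the sample lines and paths are made up; adapt the pattern to your own logs):

```python
import re
from collections import Counter

# Assumes the Apache/Nginx "combined" format; only the request line is parsed.
REQUEST_RE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def top_landing_pages(log_lines, n=3):
    """Count requests per path to find the most-hit pages."""
    paths = Counter()
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m:
            paths[m.group(1)] += 1
    return paths.most_common(n)
```

Combined with analytics data, the output tells you which landing pages and click paths deserve the bulk of the load-test budget.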
Source: Andreas Grabner, dynaTrace Blog, Posted: September 29, 2011
http://blog.dynatrace.com/2011/09/28/to-load-test-or-not-to-load-test-that-is-not-
the-question/
12. To Load Test or Not
to Load Test: That is
not the question
The following screenshots give you some examples of how we can
extract data from services such as Google Analytics or dynaTrace UEM
to better understand how to create realistic tests:
What are the top Landing Pages, the load behavior and page load
performance? Testing these pages is essential as it impacts whether a
user stays or leaves the web site.
Browser and Bandwidth information allows us to do more realistic tests
as these factors impact page load time significantly
13. To Load Test or Not
to Load Test: That is
not the question
Analyzing click sequences of real users allows us to model load test scripts that reflect real user behavior
CDN, Proxies, Latency: There is more than meets the eye
What we also learn from our real users is that not every request makes it to our application
environment. Between the end user and the application, different components
participate and impact load times: connection speed, browser
characteristics, latency, content delivery networks, and geo location. A user in the United States
on broadband will experience a different page load time than a user on a mobile device in
Europe when both are accessing an application hosted in the US. To execute tests that take this
into consideration you would actually need to execute your load from different locations in the
world using different connection speed and different devices. Some Cloud based Testing
Services offer this type of testing by executing load from different data centers or even real
browsers located around the globe. One example is Gomez First Mile Testing.
Answer to: We don’t have the tools or the expertise
This is a fair point. Load testing is usually not done on a day-to-day basis, as it is hard to justify
the costs for commercial tools, for hardware resources to simulate the load, or for people who
need constant training on tools they hardly use.
All these challenges are addressed by a new type of Load Testing: Load Testing done from the
Cloud, offered as a Service. The benefits of Cloud-based Load Testing are:
Cost Control: you only pay for the actual load tests – not for the time the software sits on the
shelf
Script generation and maintenance is included in the service and is done by people that do this
all the time
You do not need any hardware resources to generate the load as it is generated by the Service
Provider
Answer to: It’s too much effort to create and maintain scripts
Another valid claim, but one typically caused by two factors:
a) Free vs. Commercial Tools: too often, free load testing tools are used that offer easy
record/replay but lack a good scripting language that makes it easy to customize and
maintain scripts. Commercial tools put a lot of effort into solving exactly this problem. They are
more expensive but make scripting easier, saving time.
b) Tools vs. Service: Load Testing Services from the Cloud usually include script generation
and script maintenance done by professionals. This removes the burden from your R&D
organization.
14. To Load Test or Not
to Load Test: That is
not the question
Answer to: Commercial Tools are too expensive
A valid argument if you don’t use your load testing tool enough: the cost
per virtual user hour goes up. An alternative – as you can
probably guess by now – is a Cloud-based Load Testing Service that
only charges for the Virtual Users and time actually executed. Here we
often talk about the cost of a Virtual User Hour. If you know how often
you need to run load tests and how much load you need to execute over
which period of time, it is very easy to calculate the actual cost.
No actionable data after Load Test
Just running a load test and presenting the standard load testing report
to your developers will probably do no good. It’s good to know under
which load your application breaks – but a developer needs more
information than: We can’t handle more than 100 Virtual Users. With
only this information, the developers need to go back to their code, add
log output for later diagnostics, and ask the testers to run
the test again to get more actionable data. This usually leads to
multiple testing cycles, jeopardizes project schedules, and
frustrates developers and testers.
Too many test iterations consume valuable resources and impact our
project schedules
To solve this problem, Load Testing should always be combined with an
Application Performance Management solution that provides
rich, actionable, in-depth data for developers to identify and fix
problems without extra cycles, keeping the project on schedule.
15. To Load Test or Not
to Load Test: That is
not the question
Capturing enough in-depth data eliminates extra test cycles, saves time
and money
The following screenshots show some examples of what data can be
captured to make it easy for developers to go straight to fixing the
problems:
The first one shows a load testing dashboard including load
characteristics, memory consumption, database activity and
performance breakdown into application layers/components:
The dashboard tells us right away whether we have hotspots in
Memory, Database, Exceptions or in one of our application layers
16. To Load Test or Not
to Load Test: That is
not the question
In distributed applications it is important to understand which tiers are
contributing to response time and where potential performance and
functional hotspots are:
Analyzing transaction flow makes it easy to pinpoint problematic hosts or services.
The methods executed contribute to errors and bad response times. To
speed up Response Time Hotspot analysis we can first look at the top
contributors …
17. To Load Test or Not
to Load Test: That is
not the question
… before analyzing individual transactions that have a problem. As
every single transaction is captured, it is possible to analyze transaction
executions including HTTP Parameters, Session Attributes, Method
Arguments, Exceptions, Log Messages or SQL Statements, making it easy
to pinpoint problems.
Are we on the same page that Load Testing is important?
By now you should have enough arguments to push for load testing in your
development organization and ensure that new releases won’t have a negative
business impact. I’ve talked about Cloud-based Load Testing
services multiple times because they come with all the benefits I explained. I also
know that it is not the answer for every environment as it requires your
application to be accessible from the Web. Opening or tunneling ports
through firewalls or running load tests on the actual production
environment during off-hours are options you have to enable your
application for Cloud-based Load Testing.
One Answer to these Questions: Compuware Gomez 360 Web Load
Testing and dynaTrace
The new combined Gomez and dynaTrace Web Load Testing solution
provides an answer to all the questions above, and more. Without
going into too much detail, I want to list some of the benefits:
Realistic Load Generation using Gomez First Mile to Last Mile Web
Testing
In-Depth Root-Cause Analysis with dynaTrace Test Center Edition
Load Testing is a Service that reduces in-house resource requirements
Keep your costs under control with per Virtual User Hour billing
Works throughout the application lifecycle – from production, to test, to
development
18. To Load Test or Not
to Load Test: That is
not the question
Running a Gomez Load Test allows you to execute load from both
Backbone Testing Nodes and Real User Browsers located around
the world. The Last Mile in particular is an interesting option, as this is the
closest you can get to your real end users. The following screenshot
shows the Response Time Overview during a load test from different
regions of the world, allowing you to see how the performance of your
application is perceived in the locations of your real end users:
19. To Load Test or Not
to Load Test: That is
not the question
From here it is an easy drill-down in dynaTrace to analyze how
increasing load affects the performance and functional health of the
tested application:
20. How server-side
performance
affects mobile user
experience
Testing mobile web sites on the actual device is still a challenge. While
tools like dynaTrace Ajax Edition make it very easy to get detailed
performance data from desktop browsers, we do not have the same
luxury for mobile.
I was wondering whether desktop tooling can be used for analyzing
and optimizing mobile sites. My idea was to start testing mobile web
sites in desktop browsers. Many websites return mobile content even
when it is requested by a desktop browser. For sites you control,
it is also possible to override browser checks.
The basic rationale behind this approach is that if something is already
slow in a desktop browser, it will not be fast in a mobile browser. Typical
problem patterns can also be analyzed more easily in a desktop
environment than on a mobile device.
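Getting the mobile variant of a site out of a desktop tool usually comes down to sending a mobile User-Agent. A minimal sketch using Python’s standard library; the UA string is only an example, and sites that sniff the header will typically serve their mobile page to it:

```python
import urllib.request

# Example iPhone-era User-Agent string; UA-sniffing servers will usually
# return their mobile variant when they see it.
MOBILE_UA = ("Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3 like Mac OS X) "
             "AppleWebKit/533.17.9 (KHTML, like Gecko) Mobile/8F190")

def fetch_as_mobile(url):
    """Fetch a page while pretending to be a mobile browser."""
    req = urllib.request.Request(url, headers={"User-Agent": MOBILE_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.status, len(resp.read())
```

The same override works in desktop browsers via user-agent switcher extensions, which is what makes this desktop-analysis approach practical.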
I chose the United website, inspired by Maximiliano Firtman’s talk at
Velocity. I loaded the regular and the mobile site with Firefox and
collected all performance data with dynaTrace. The first interesting fact
was that the mobile site was much slower than the regular site.
Source: Alois Reitbauer, dynaTrace Blog, Posted: September 2, 2011
http://blog.dynatrace.com/2011/09/02/how-server-side-performance-affects-
mobile-user-experience/
21. How server-side
performance
affects mobile user
experience
mobile.united.com is slower than united.com
This is quite surprising as the mobile site has way less visual content as
you can see below. So why is the site that slow?
When we look at the timeline, we see that the mobile site is using only
one domain while the regular site is using an additional content
domain. Serving everything from one domain has a serious impact
on performance.
22. How server-side
performance
affects mobile user
experience
I checked the latest results from BrowserScope to see how many
connections mobile browsers can handle. They use up to 35
connections, which is quite a lot. The United mobile site does not
leverage this fact.
Connections per
domain and total for
mobile browsers
Looking at the content reveals two optimization points. First, a lot of the
content is images, which could be sprited. Spriting would block only
one connection and additionally speed up download times. The
second point is that the CSS used is huge. A 70k CSS file for a
12k HTML page is quite impressive.
Very large CSS file on mobile.united.com
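The CSS-to-HTML imbalance is easy to turn into an automated smell test once you have per-resource sizes. The figures below mirror the 70k/12k case but use a placeholder domain:

```python
def css_to_html_ratio(resources):
    """resources: dict of URL -> (content_type, size_in_bytes).
    Returns total CSS bytes divided by total HTML bytes."""
    css = sum(size for ctype, size in resources.values() if "css" in ctype)
    html = sum(size for ctype, size in resources.values() if "html" in ctype)
    return css / html if html else float("inf")

sizes = {
    "https://m.example.com/": ("text/html", 12_000),
    "https://m.example.com/style.css": ("text/css", 70_000),
}
ratio = css_to_html_ratio(sizes)  # close to 6x more CSS than HTML
```

A ratio well above 1 on a stripped-down mobile page is a strong hint that the stylesheet was carried over wholesale from the desktop site.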
23. How server-side
performance
affects mobile user
experience
While these improvements will make the page faster, they are not the
biggest concern. Looking at the requests, we can see several
network requests which take longer than 5 seconds. One of
them is the CSS file which is required to lay out the page. This means that
the user does not see a properly laid-out page for at least 5 seconds (not
even counting network transfer time). So in this case the
server used for the mobile website is the real problem.
Request with very high server times
Conclusion
This example shows that basic analysis of mobile web site performance
can also be done on the desktop. Performance issues caused
by slow server-side response times or non-optimized resource delivery
in particular can be found easily. The United example also shows how important
effective server-side performance optimization is in a mobile
environment. When we have to deal with higher latency and lower
bandwidth, we have to optimize server-side delivery to gain more
headroom for dealing with slower networks.
Looking at the content delivery chain, which starts at the end user and
goes all the way back to the server side, it becomes clear that any time
we lose on the server cannot be compensated for by upstream
optimization.
24. Google Chrome
Plus Extensions:
Plus or Minus?
As anyone who’s written serious software knows,
application performance issues come in all shapes and
sizes, and not necessarily when or where you expect to see
them. It’s one story when a well defined (and hopefully well managed)
development team handles the entire application. The story becomes entirely
different when you’re talking about open source software, open platforms for
the web, or social networks. The more social the modern web environment
becomes for the consumer and developer, the more gotchas the
unsuspecting users are likely to see.
Growing Pains for Google’s Chrome and Google+
As I mentioned in a guest post that ReadWriteWeb published today, Google
Chrome and the Google+ platform seem to be suffering from growing pains.
Chrome has a well-deserved reputation for speed and reliability. Fast forward
to the Google+ social experiment and Chrome’s role as a platform for third
party extensions. Chrome’s supremacy might be quickly compromised by the
“social developers” for Google+.
Possible Performance Problems with Google+ Extensions
The new extensions can be developed by practically anybody, from a
seasoned web programmer to a high school kid dreaming about becoming
the next Mark Zuckerberg. The extensions might have performance problems
of their own (read: quickly hacked together code) that affect the perceived
user experience, or they might have some hard-to-anticipate dependencies
on external data streams, web services, or servers and thus indirectly affect
Chrome's overall performance. All of these issues can and do result in
substantially reduced browser performance and a degraded user experience.
And that takes some of the shine off the Chrome brand.
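The compounding effect described above can be sketched with a toy model. The assumption (a simplification of how extensions actually run) is that extension work on a page is serialized, so each extension's cost adds to the delay before the page feels ready; all timings are invented for illustration:

```python
# Hypothetical model: content scripts from several installed extensions
# run against the same page. If their work is serialized, the delay
# before the page feels responsive is the sum of their individual costs.

def page_ready_delay(extension_costs_ms):
    """Total added delay when extension work runs back to back."""
    return sum(extension_costs_ms)

well_behaved = [5, 8, 12]            # fast, carefully tested extensions
with_hack = well_behaved + [650]     # plus one quickly hacked-together one

print(page_ready_delay(well_behaved))  # 25
print(page_ready_delay(with_hack))     # 675
```

A single poorly written extension dominates the total, which is why the user blames "Chrome being slow" rather than the extension responsible.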
What Do You Think? Get Involved in the Discussion...
We are in the business of helping developers produce high-quality
applications and extensions with the least effort. Please share with us and the
community what kinds of problems you face when developing for the Google+
ecosystem, what tools you use now, and what tools you think you might need
in the future.
Have you ever built a Chrome extension? How do you recommend testing it?
Source: Sergei Sokolov, SmartBear Blog, posted September 11, 2011
http://blog.smartbear.com/post/11-08-03/Google-Chrome-Plus-Extensions-Plus-or-
Minus.aspx
25. MeasureMatters
MeasureMatters is the blog of MeasureWorks focusing
on the importance of web performance optimization
for your online revenue.
About MeasureWorks
MeasureWorks provides Web Performance Optimization.
MeasureWorks measures, analyzes and improves the quality of
experience of online retailers, travel agencies, financial institutions and
other online services. We observe their visitors closely, find out how they
perceive quality, and identify which areas require improvements. We
enable our clients to create loyal customers by improving their services
proactively from a business perspective.
Do you want to maximize revenue and drive customer loyalty by
identifying problems before your customers do, and quantify how web
performance impacts your business results? Visit
www.measureworks.nl to see what we can do for you!