This slide deck was the base that Karel and I used to deliver an internal web optimization training at REA. Hopefully it can be helpful to other people interested in performance optimization, as it contains lots of pointers to valuable resources where you can learn more about the topic.
I recommend using a view that allows you to see the Notes, as some of the slides have extra content included in that view.
2. Response time and human beings
0.1s → illusion of instantaneous response
1s → keeps our flow of thought seamless
10s → keeps our attention, just barely.
>10s → we start thinking about other things, making it harder to
get back into our task when the website finally responds
3. Business effects of web
performance
http://www.guypo.com/17-statistics-to-sell-web-performance-optimization/
4. Fast experience – Happy customer
Fastest pit-stop ever was 2.05 seconds
https://www.youtube.com/watch?v=irEJCCq1UoU
5. >20 sec – we would never do that to
you, would we?
Averages are commonly used to measure performance, leading in many cases to
misleading reads. Using the median and percentiles can bring a different view to
the table.
In the above graph at least 5% of the requests take over 30s to fully render,
probably providing a poor experience to consumers. If we were serving, for
example, 100 ppm (pages per minute), 5 of those page views would have terrible
performance :(
6. Let's see how it can feel if not done
properly
https://www.youtube.com/watch?v=uP1gmdvrwdc
7. Performance metrics
How to measure. Two schools, use both:
Synthetic / Real User Monitoring
For example:
New Relic
WebPage Test
Browser tools
Dynatrace Gomez
8. What to measure
When is your page loaded and usable?
DOM content loaded?
Visually complete?
Document complete?
OnLoad?
24. More caching
Expires, Cache-Control, ETag.
Edge caching vs Browser caching
Cache busting and good patterns for caching
Stale content
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en
25. Test driven caching
Using Rspec for our testing:
1. Write your tests
2. Watch them fail
3. Make your Akamai change
4. Deploy to Akamai staging
5. Watch them pass (for Akamai staging)
6. Deploy to Akamai production
7. Watch them pass
8. Win!
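The cacheability checks behind the steps above can be sketched in plain Ruby. This is a hypothetical helper (the header semantics are real HTTP, but the function name and thresholds are illustrative); in practice we wrap checks like this in RSpec expectations against responses from Akamai staging and production:

```ruby
# Hypothetical helper for cacheability tests: given a hash of response
# headers, decide whether the response may be cached for at least
# `min_age` seconds. Header names follow HTTP; everything else is a sketch.
def cacheable_for?(headers, min_age)
  cc = headers.fetch("cache-control", "")
  return false if cc.include?("no-store") || cc.include?("private")

  max_age = cc[/max-age=(\d+)/, 1]
  !max_age.nil? && max_age.to_i >= min_age
end

cacheable_for?({ "cache-control" => "public, max-age=3600" }, 600)  # => true
cacheable_for?({ "cache-control" => "private, max-age=3600" }, 600) # => false
```

In a spec this becomes something like `expect(cacheable_for?(response_headers, 600)).to be true`, run first against the staging host and then against production.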
26. Optimize images
Right compression
Lazy loading / below the fold
Sprites
https://developers.google.com/web/fundamentals/media/images/optimize-images-for-performance?hl=en
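For the lazy-loading point above, browsers now support a native hint (the image path here is a placeholder; at the time of the talk this required a JavaScript library instead):

```html
<!-- Defer fetching until the image approaches the viewport -->
<img src="/images/listing-photo.jpg" alt="Listing photo"
     loading="lazy" width="640" height="480">
```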
28. Know your third parties
http://requestmap.webperf.tools/render.php?id=150320_V2_3b40b4504ee22b165e1ec362af5e88d9
29. Tools that can help you
Yslow
http://yslow.org/
Pagespeed
https://developers.google.com/speed/pagespeed/
Sitespeed.io
http://www.sitespeed.io/
30. Forget, almost, everything I said
HTTP2 changes the game :D
Binary
Header compression
Multiplex single TCP connection
Server push into client cache
Undoing HTTP/1.1 best practices: Sharding, image spriting,
resource in-lining and concatenation will no longer be
necessary
http://http2.github.io/faq/
31. Use a performance budget
http://speedcurve.com/site/1/chrome/1/30/39tfnozeq94p1o0hndk1kpbg4vb7cg/
Include it in your CI and reject deployments that break it.
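That CI check can be as simple as comparing measured numbers against agreed limits. A minimal sketch, assuming hypothetical metric names and limits (the real budget, and how you collect the measurements, e.g. from WebPageTest or SpeedCurve, is up to you):

```ruby
# Hypothetical performance budget: metric names and limits are examples.
BUDGET = { "total_kb" => 500, "requests" => 50, "start_render_ms" => 1500 }

# Return the list of metrics that exceed their budget; a CI wrapper
# would fail the build (exit non-zero) unless this list is empty.
def over_budget(measured, budget = BUDGET)
  budget.select { |metric, limit| measured.fetch(metric, 0) > limit }.keys
end

over_budget({ "total_kb" => 620, "requests" => 42, "start_render_ms" => 1400 })
# => ["total_kb"]
```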
This slide deck tries to paint a general view on Web Performance and hopefully provides a series of links for people that want to learn more about the topic.
It follows this structure:
- Why should we care?
- How can we measure it?
- How can we optimize it?
The provided link gives a lot of good reasons why web performance is important for your business and provides useful data for having a business conversation about the topic.
http://www.guypo.com/17-statistics-to-sell-web-performance-optimization/
What do 2 seconds feel like? Check the video.
https://www.youtube.com/watch?v=irEJCCq1UoU
OK, we may be aiming for those 2 seconds, but are we measuring the right thing?
Let's see how it feels when we don't pay enough attention or fail to optimize our performance.
https://www.youtube.com/watch?v=uP1gmdvrwdc
At this point we explained the difference between synthetic and real user monitoring, what each is good and bad at, and why we can take advantage of using both.
Then we spent quite a bit of time on a deep dive into the tools we use to monitor performance.
DOM content loaded → the DOM has been parsed and is ready, but CSS and images may still be loading
OnLoad: everything, including images, has loaded; that's what Google SEO uses
TCP 3-way handshake
Curl and ping examples for SSL and TCP connect times
Show keep alive with curl
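Those curl demos can be reproduced with the `-w` (write-out) flag, which prints timing variables after each transfer; the URL is just a placeholder. Passing the same URL twice in one invocation lets curl reuse the connection, which is how we showed the keep-alive effect:

```shell
# Break down where the time goes in a single HTTPS request (seconds):
curl -s -o /dev/null \
  -w 'dns: %{time_namelookup}\ntcp: %{time_connect}\ntls: %{time_appconnect}\nttfb: %{time_starttransfer}\ntotal: %{time_total}\n' \
  https://www.example.com/

# Keep-alive: two URLs in one invocation share the TCP/TLS connection,
# so the second transfer reports (almost) no connect time at all.
curl -s -o /dev/null -o /dev/null \
  -w 'connect: %{time_connect}  total: %{time_total}\n' \
  https://www.example.com/ https://www.example.com/
```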
In these slides we can see how DNS can affect performance, especially on the initial requests, where parallelization has not yet kicked in and a slow lookup pushes out the full waterfall really badly.
DNS pre-fetching is one of the techniques we can use to optimize DNS requests, making the browser do the DNS lookups as soon as possible and anticipating resources that may be slow to load.
We also briefly mentioned other pre-fetching mechanisms available to us to optimize our performance.
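In markup, DNS pre-fetching is a one-line hint (the hostname below is a placeholder); `preconnect` is one of those other mechanisms and goes further by also opening the TCP/TLS connection:

```html
<!-- Resolve the CDN hostname early, before any resource on it is requested -->
<link rel="dns-prefetch" href="//cdn.example.com">
<!-- Or resolve DNS and open the TCP/TLS connection in one hint -->
<link rel="preconnect" href="https://cdn.example.com">
```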
http://www.webpagetest.org/result/150226_FY_88fe5cd29a079191706c94ba5ed89bb0/1/details/
This is an amazing free resource that I can't stop recommending to people. Even if you are not interested in networking from a browser perspective, this book is a great resource to learn about networking in general.
http://bit.ly/browser-networking
Once the HTTP request hits your servers and the code starts executing, there are many elements we can tune to speed up the response.
In general terms you should aim to deliver your full response in less than 200 ms to avoid a big impact on overall performance.
Watch your external dependencies carefully (databases, caches, APIs, etc.), as their performance is a direct contributor to your app's performance.
Having end-to-end visibility of the performance of your app is critical to being able to improve it or troubleshoot problems.
Waterfall graphs help us a lot in identifying bottlenecks in the rendering of the page. Spending time understanding how they work pays off significantly.
It includes a bad joke about agile and waterfall, sorry I couldn't resist.
I did this free online course about optimizing rendering by focusing on the critical rendering path and I learned so much! Totally worth doing if you are working on anything related to web performance.
https://www.udacity.com/course/ud884
Again, a stupid joke about MVPs, but I think the concept is quite clear: do your loading in two steps, a quick one to get the basics and then everything else async.
Really good examples of quick, well-optimized pages from some well-known experts. Reading their code is a great way to learn how to optimize things, and I used it as an example of inlining critical assets for that first render.
Performance comparison in
http://speedcurve.com/benchmark/a/a/a/a/30/loaded/qfrcrmxw50s0z2jjlnm53kh28jq2j8/
In this slide we explained some of the advantages of using a CDN, including:
- Shortening the tcp connection
- Having persistent connections between the edge and your origin
- Offloading via caching and delivering faster from a close location.
- Also we talked about caching static and dynamic content, and how not all content is that dynamic: much of it can probably be cached for a short period of time.
This is a good one to see the effect of caching on a web request: it makes the second, third, etc. views much faster than the first hit.
Only third party components that are not cacheable will be reloaded.
How can you control your caching? We explained the effect of the different HTTP headers and how we can control the different caches through them.
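As a concrete illustration of those headers (the values are examples, not our real policy), a cacheable response and its later revalidation look like this:

```http
# First response: cacheable for a day, with a validator (example values)
HTTP/1.1 200 OK
Cache-Control: public, max-age=86400
ETag: "33a64df5"

# After expiry, the cache revalidates with the stored ETag...
GET /logo.png HTTP/1.1
If-None-Match: "33a64df5"

# ...and a 304 means "still fresh", so no body is re-sent
HTTP/1.1 304 Not Modified
```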
Choosing a good naming scheme for your files (my favourite is using a SHA as part of the name) normally removes the annoying need to invalidate caches, and lots of headaches with it. Thanks Dan Hall for that chat about caching a few months ago.
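A sketch of that SHA-based naming (the helper name is made up; asset pipelines do essentially this for you): because the digest is derived from the file content, the name changes exactly when the content does, so cached copies never go stale.

```ruby
require "digest"

# Hypothetical cache-busting helper: embed a content digest in the filename.
def fingerprinted_name(path, content)
  ext  = File.extname(path)        # ".js"
  base = File.basename(path, ext)  # "app"
  "#{base}-#{Digest::SHA256.hexdigest(content)[0, 12]}#{ext}"
end

fingerprinted_name("app.js", "console.log('v1');")
# e.g. "app-<12 hex chars>.js"; a new content version yields a new name,
# so the old cached file is simply never requested again.
```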
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en
This is a concept we have been implementing recently: writing tests for the cacheability of our components to avoid breaking it over time. RSpec for the win!
When you have a fairly complex page you are probably including many third-party components. Understanding the impact they can have on your page is important.
Some tools, for example Dynatrace Gomez, can also help you monitor the performance of your third parties and alert on them, including giving you visibility when they are failing for other sites as well.
http://requestmap.webperf.tools/render.php?id=150320_V2_3b40b4504ee22b165e1ec362af5e88d9
The idea behind a performance budget is to constrain the elements that can affect the performance of a page (for example the number of dependencies, the size of your images, the elements in the critical rendering path, etc.) to an agreed budget.
My favourite part is trying to agree on a budget beforehand, rather than trying to optimize the page once it is built.
SpeedCurve has started to work on that concept in their dashboards, check it out:
http://speedcurve.com/site/1/chrome/1/30/39tfnozeq94p1o0hndk1kpbg4vb7cg/
Dashboards and reports are probably a well-known solution for providing visibility into your performance; use them as much as you can.
I also like how GitHub has an internal view for employees that provides constant visibility of the performance of their pages.