
Holistic JavaScript Performance




  1. HOLISTIC PERFORMANCE
     John Resig
  2. Performance
     - Performance analysis is amazingly complex
     - There is no single silver bullet
     - We don't want to compromise quality in favor of performance
     - We also want to communicate the changes in a realistic way
  3. Analyzing Performance
     - Wall-clock time
     - Time in different browsers
     - CPU consumption
     - Memory consumption
     - Memory leaks
     - Bandwidth consumption
     - Parse time
     - Battery consumption (mobile!)
  4. Dictionary Lookups in JavaScript
     - An interesting example for looking at performance
     - Most frequent concern: file size
     - Many solutions optimize only for file size, disregarding parse time and other performance aspects
  5. Naïve Solution
     - Pull in a raw list of words
     - Push it into an object for fast property lookups
     - Uses a lot of file size
     - Very fast lookups
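The naïve approach above can be sketched in a few lines. This is an illustration, not the original code; the three-word list is a stand-in for a real dictionary that would be hundreds of kilobytes:

```javascript
// Naive dictionary: load a raw word list into a plain object so each
// lookup is a single fast property access. Large download, fast lookups.
var rawWords = "aah\naahed\nzyzzyva"; // assumption: newline-delimited list

var dict = {};
rawWords.split("\n").forEach(function (word) {
  dict[word] = true;
});

function isWord(word) {
  // hasOwnProperty guards against inherited keys like "constructor"
  return Object.prototype.hasOwnProperty.call(dict, word);
}
```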
  6. Trie
     - A compact structure for storing dictionaries
     - Optimizes heavily for file size
     - Can be rather expensive to parse
     - Can also use a lot of memory
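A minimal trie can be sketched as nested objects. This shows the data structure itself, not the optimized or succinct encodings benchmarked on the following slides; the "$" end-of-word flag is an arbitrary choice for this sketch:

```javascript
// Minimal trie: each node is an object keyed by character, with a "$"
// property marking the end of a complete word.
function trieInsert(root, word) {
  var node = root;
  for (var i = 0; i < word.length; i++) {
    var ch = word.charAt(i);
    node = node[ch] || (node[ch] = {});
  }
  node.$ = true;
}

function trieContains(root, word) {
  var node = root;
  for (var i = 0; i < word.length; i++) {
    node = node[word.charAt(i)];
    if (!node) return false;
  }
  return node.$ === true; // reject prefixes that aren't whole words
}

var trie = {};
["cat", "cats", "car"].forEach(function (w) { trieInsert(trie, w); });
```

Shared prefixes ("cat"/"cats"/"car") are stored once, which is where the file-size win comes from; the per-node objects are where the memory cost comes from.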
  7. File Size of Dictionaries
     [Chart: normal vs. gzipped file size, 0KB-1100KB scale, for Plain String, Binary String, Simple Trie, Optimized Trie, Suffix Trie, and Succinct Trie]
  8. Load Speed of Dictionaries
     Time to load the dictionary once in Node.js on a 2.8 GHz Core i7.
     [Chart: 0ms-150ms scale, for Plain String, Binary String, Hash Trie, and Succinct Trie]
  9. Search Speed of Dictionaries
     Time to look up one word.
     [Chart: 0ms-6ms scale, found vs. missing words, for Plain String, Binary String, Hash Trie, and Succinct Trie]
  10. Private Memory Usage of Dictionaries
     After loading the dictionary once.
     [Chart: 0MB-11MB scale, for Plain String, Binary String, Hash Trie, and Succinct Trie]
  11. dynaTrace
  12. dynaTrace
     - One of the best tools available for analyzing the full browser stack
     - Dig into CPU usage, bandwidth usage, and even the performance of browser-internal methods
     - Works in both IE and Firefox
  13. Practical Performance
     - Think about the larger context
     - Premature optimization is dangerous
     - Code quality
     - Importance
     - Cross-browser compatibility
  14. Performance in the jQuery Project
  15. Rule 1: Prove it.
  16. Prove it.
     - Any proposed performance optimization must be undisputedly proven
     - Show us the proposed changes and how they'll affect performance across all platforms
     - How? JSPerf: http://jsperf.com/
  17. JSPerf
     - JSPerf is a great tool
     - Makes it very easy to build a reproducible test: http://jsperf.com/valhooks-vs-val/2
  18. JSPerf
     - JSPerf builds on some of the earlier benchmark analysis I did in 2008: http://ejohn.org/blog/javascript-benchmark-quality/
     - Runs each test as many times as possible within 5 seconds
     - Optimizes the test harness to reduce loop overhead
     - Also uses a Java applet for even better timer accuracy
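The "as many iterations as fit in a time budget" strategy can be sketched as follows. This is a simplified illustration, not JSPerf's actual harness (which is built on Benchmark.js and is far more careful about calibration and statistics); the 100 ms budget and batch size of 1000 are arbitrary choices for this sketch:

```javascript
// Run fn as many times as possible within budgetMs, then report a rate.
// Batching calls reduces the relative overhead of reading the clock,
// which is the same concern JSPerf's loop optimizations address.
function opsPerSecond(fn, budgetMs) {
  budgetMs = budgetMs || 100;
  var batch = 1000;
  var iterations = 0;
  var start = Date.now();
  while (Date.now() - start < budgetMs) {
    for (var i = 0; i < batch; i++) fn();
    iterations += batch;
  }
  var elapsedMs = Date.now() - start;
  return (iterations / elapsedMs) * 1000;
}

var ops = opsPerSecond(function () { return Math.sqrt(12345); });
```

Fixing the time budget instead of the iteration count is what lets one test page compare a fast browser and a slow one fairly: each gets the same wall-clock slice.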
  19. Rule 2: See the Big Picture.
  20. See the Big Picture.
     - Micro-optimizations are death
     - It doesn't matter how much you unroll a loop if that loop is doing DOM manipulation
     - Most crippling web app performance problems come from DOM performance issues
     - Pure JS performance is rarely an issue
  21. Prove the use case.
     - If you're proposing an optimization, you must prove what it'll help
     - Show real-world applications that'll benefit from the change
     - This is especially important because it stops you from wasting time on performance issues that don't matter
  22. Rule 3: Clean Code.
  23. Clean Code.
     - We won't compromise our code quality in exchange for performance
     - Almost all code quality compromises come from needless micro-optimizations
     - ~~(1 * string) vs. parseInt( string )
     - +new Date vs. (new Date).getTime()
     - Don't even get me started on loop unrolling
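The two rewrites named above are not even drop-in equivalents, which strengthens the case for the readable form. A quick sketch of where they agree and where they diverge:

```javascript
// Both forms agree on clean numeric strings...
var sameForPlainInts = ~~(1 * "42") === parseInt("42", 10); // both give 42

// ...but diverge on messy input: parseInt tolerates trailing junk,
// while the coercion trick collapses it to 0.
var coerced = ~~(1 * "12px");        // 1 * "12px" is NaN, and ~~NaN is 0
var parsed  = parseInt("12px", 10);  // stops at the junk and returns 12

// The timestamp shorthand: +new Date coerces the Date to a number,
// same value as the explicit getTime() call.
var sameTimestamp = typeof +new Date() === "number" &&
                    typeof new Date().getTime() === "number";
```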
  24. Rule 4: Don't Slow IE.
  25. Don't Slow IE.
     - Just because performance gets better in one browser doesn't mean it'll get faster in all browsers
     - You shouldn't compromise performance in other browsers for the sake of one
     - (Unless that browser is IE; always improve IE performance.)
  26. Communicating the Results
     - Creating realistic tests
     - Communicating in an effective manner
  27. Creating Realistic Tests
  28. Realism
     - It's incredibly hard to create realistic test cases
     - It's important to look at actual applications
     - We frequently use Google Code Search to find out how people are using our APIs
     - (This also gives us the knowledge we need when we want to deprecate an API.)
  29. Communicating the Results
  30. Browserscope
     - Collection of performance results
     - Organized by browser
     - JSPerf plugs right in
  31. Creating Results
     - Pull the results directly from Browserscope
     - Best: compare old versions to new versions, within the context of all browsers
  32. .val() (get)
     (Number of test iterations, higher is better.)
     [Chart: 0-700,000 scale across Chrome 11, Safari 5, Firefox 4, Opera 11, IE 7, IE 8, and IE 9, comparing jQuery 1.5.2 vs. 1.6]
  33. Competition
     - You might be inclined to compare performance against other frameworks, libraries, applications, etc.
     - This tends to create more problems than it's worth
     - And the comparison isn't always one-to-one
     - If competing, agree on some tests first
     - Work with your competition to create realistic tests
  34. Compete Against Yourself
     - In the jQuery project we work to constantly improve against ourselves
     - Every release we try to include some performance improvements
     - Always compare against our past releases
     - Rewriting API internals is a frequent way of getting good performance results
  35. More Information
     Thank you!
     - http://ajax.dynatrace.com/ajax/en/
     - http://jsperf.com
     - http://www.browserscope.org
     - http://ejohn.org/blog/javascript-benchmark-quality/
     - http://ejohn.org/
