2. @patrickstox
Who is Patrick Stox?
• Product Advisor, Technical SEO, & Brand Ambassador at Ahrefs
• I wrote for Search Engine Land, now the Ahrefs blog
• I speak at conferences such as SMX, Pubcon, and TechSEO Boost
• Organizer of the Raleigh SEO Meetup (the most successful in the US) and the Beer & SEO Meetup
• We also run a conference, the Raleigh SEO Conference
• Judge: 2017/18/19 US Search Awards, 2017/18/19 UK Search Awards, 2018/19 Interactive Marketing Awards
• Founder of the Technical SEO Slack Group
• Moderator of /r/TechSEO on Reddit
• Finalist for SEO Speaker of the Year and SEO Contributor of the Year, 2018 Search Engine Land Awards
• On many top-SEO lists, such as 140 of Today's Top SEO Experts to Follow
• Part of SERoundtable's Honor an SEO series
10. @patrickstox
Tree Shaking (After)
Loads only what is needed
Eliminates unused code
Smaller Pages
https://stackoverflow.com/questions/45884414/what-is-tree-shaking-and-why-would-i-need-it
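The idea behind tree shaking can be sketched with ES modules (the file names and helpers below are hypothetical):

```javascript
// utils.js (hypothetical module) — only `double` is ever imported by the app
const double = (x) => x * 2;          // imported elsewhere → kept in the bundle
const unusedHelper = (x) => x ** 10;  // never imported → removed by tree shaking

// app.js would contain: import { double } from './utils.js';
// Bundlers like webpack and Rollup statically analyze ES module imports
// and drop `unusedHelper` from the production bundle.
console.log(double(21));
```

Because the analysis is static, it only works with ES module `import`/`export` syntax, not dynamic `require()` patterns.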
12. @patrickstox
Why Code Splitting
Interactivity
JavaScript competes for the main thread. While a task is running, the page can't respond to user input; that delay is what users feel.
Source: https://web.dev/long-tasks-devtools
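Code splitting defers that main-thread work with dynamic `import()`. A minimal lazy-loading sketch (Node's `os` module stands in for a heavy UI module):

```javascript
// Code-splitting sketch: load a module only when it is first needed.
// Bundlers turn each dynamic import() into a separate chunk, so the
// initial bundle (and main-thread work) stays small.
const moduleCache = new Map();
function lazyLoad(name, loader) {
  if (!moduleCache.has(name)) moduleCache.set(name, loader());
  return moduleCache.get(name);
}

// 'os' stands in for a heavy module you would only load on interaction.
const first = lazyLoad('os', () => import('os'));
const second = lazyLoad('os', () => import('os'));
// first === second: the module is fetched once and reused afterwards
```

In an app, the `loader` would point at a route or component file, typically triggered by navigation or a click.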
17. @patrickstox
Google Recommends Dynamic Rendering
Dynamic rendering means switching between client-side rendered and pre-rendered content for specific user agents.
Options: Puppeteer, Rendertron, prerender.io
Source (Google): https://developers.google.com/search/docs/guides/dynamic-rendering
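The user-agent switch at the heart of dynamic rendering can be sketched like this (the regex is illustrative; real setups use Rendertron or prerender.io middleware):

```javascript
// Dynamic-rendering sketch: detect known crawlers by user agent.
// The bot list here is illustrative, not exhaustive.
const BOT_UA = /googlebot|bingbot|yandex|baiduspider/i;

function isBot(userAgent) {
  return BOT_UA.test(userAgent || '');
}

// In an HTTP handler you would branch on it:
// if (isBot(req.headers['user-agent'])) → serve pre-rendered HTML
// else → serve the normal client-side app
```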
19. @patrickstox
Don’t Abuse It, Cloaking
Using dynamic rendering to serve completely different
content to users and crawlers can be considered cloaking
Cloaking refers to the practice of presenting different
content or URLs to human users and search engines.
Cloaking is considered a violation of
Google’s Webmaster Guidelines because it provides our
users with different results than they expected.
https://support.google.com/webmasters/answer/66355?hl=en
20. @patrickstox
Other Search Engines
Bing has the capability to render JS, but at what scale is unknown. They mostly use it for top pages and web spam.
https://blogs.bing.com/webmaster/october-2019/The-new-evergreen-Bingbot-simplifying-SEO-by-leveraging-Microsoft-Edge/
Yandex, Baidu = limited
Other search engines: probably not.
21. @patrickstox
Scaling Rendering Is Expensive
Rendering basically needs to behave like a browser; that's both complicated and expensive. Crawl costs go up by about 20x when you start rendering.
It gets interesting at scale, with trillions of pages in the index, mainly because of 1) fetching the content and then 2) running the JavaScript; it's a lot of new logic.
JavaScript: the good news is they're running Chrome, so the environment is good. The bad news is there's a lot of JS and they need to run a lot of it. Google is constrained on CPU globally.
https://www.jackiecchu.com/seo/google-webmaster-conference-mountain-view-2019/
23. @patrickstox
Testing
Don't use Google's cache. That's an HTML snapshot processed by your browser.
Don't use view-source; that's only the raw HTML.
Do use the Mobile-Friendly Test: https://search.google.com/test/mobile-friendly
Do use the URL Inspection Tool: https://search.google.com/search-console
- Shows loaded/blocked resources, console output and exceptions, and the rendered DOM
Do use the Rich Results Test (desktop + mobile): https://search.google.com/test/rich-results
Google search: site:whatever.com "part of your text" to check whether text is indexed
24. @patrickstox
You May See Another Page/Domain Indexed
If you're using an app shell model, pages may be detected as duplicate content and put into the same cluster, with the wrong page shown.
Basically, the HTML looked the same as something else, so Google figured the pages were duplicates and only wanted one record in the index. This should resolve once the pages have been through the renderer.
25. @patrickstox
Need To Know About Googlebot
Declines user permission requests
Stateless, doesn't navigate
• Local Storage and Session Storage data are cleared across page loads.
• HTTP Cookies are cleared across page loads.
Use feature detection to identify supported APIs and capabilities
Make sure your web components are search-friendly:
• To encapsulate and hide implementation details, use shadow DOM.
• Put your content into light DOM whenever possible.
https://developers.google.com/search/docs/guides/fix-search-javascript
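Feature detection can be as simple as checking that an API exists before using it (the lazy-loading use case below is illustrative):

```javascript
// Feature-detection sketch: test that a browser API exists before using
// it, instead of sniffing user agents. Googlebot declines permission
// requests and lacks some APIs, so always provide a fallback.
function supports(apiName) {
  return typeof globalThis[apiName] !== 'undefined';
}

if (supports('IntersectionObserver')) {
  // e.g. lazy-load images as they scroll into view
} else {
  // fall back to eager loading so crawlers still see the content
}
```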
26. @patrickstox
Need To Know About Googlebot
Will hit APIs if it’s allowed, potentially a lot
They need to access resources (like JavaScript), don’t block them
Between the initial snapshot and the rendered version they will choose the most
restrictive statements (nofollow vs follow, noindex vs index, etc)
Some pages take longer to be processed than others:
https://webmasters.googleblog.com/2017/01/what-crawl-budget-means-for-googlebot.html
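The "most restrictive statement" behavior can be sketched as a tiny merge rule (my simplification of the described behavior, not Google's actual code):

```javascript
// Sketch: between the robots value in the raw HTML and the one in the
// rendered DOM, the most restrictive wins. The same logic applies to
// nofollow vs follow.
function mostRestrictive(rawHtmlValue, renderedValue) {
  return rawHtmlValue === 'noindex' || renderedValue === 'noindex'
    ? 'noindex'
    : 'index';
}
```

Practical consequence: you cannot use JavaScript to flip a `noindex` in the initial HTML back to `index`.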
27. @patrickstox
Need To Know About Googlebot
Internal links may not be picked up and added to the crawl queue until after the render happens
Mobile First: https://webmasters.googleblog.com/2018/03/rolling-out-mobile-first-indexing.html
28. @patrickstox
Need To Know About Googlebot
Mostly crawls from the West Coast of the US (Mountain View), with some crawling internationally.
They are very aggressive with caching everything (you may want to use file versioning). This can lead to impossible states being indexed if parts of old files are cached.
They download pages and download resources, but all of this is stored and then run as fast as possible as "rendering".
30. @patrickstox
Need To Know About Googlebot
Googlebot renders with a long viewport. The mobile screen size is 431 x 731, and Google resizes it to 12,140 pixels high; the desktop version is 768 x 1024, but Google only resizes to 9,307 pixels high.
Credit to JR Oakes @jroakes: https://codeseo.io/console-log-hacking-for-googlebot/
31. @patrickstox
Best Practices
Don't use hashes (#) for routing
Load content by default, without requiring an action like a click, mouseover, or scroll
Make sure links are links: <a href="/good-link">Will be crawled</a>
• Not: <span onclick="goTo('/bad-link')"> — without a real href, it won't be crawled
32. @patrickstox
Best Practice: Clean URLs
Change URLs for different content: use the History API and HTML5 pushState()
Use your router:
None of this: ?Topics%5B0%5D%5B0%5D=cat.topic%3Ainfrastructure
A JavaScript router is what allows state changes, and you can use it to get clean URLs.
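A minimal sketch of what a router does: map clean paths to handlers and update the URL with `history.pushState()` (the route table and handlers here are illustrative):

```javascript
// Clean-URL routing sketch using the History API.
// Routes and handler return values are hypothetical examples.
const routes = {
  '/': () => 'home',
  '/products': () => 'product list',
};

function resolve(path) {
  return (routes[path] || (() => 'not found'))();
}

function navigate(path) {
  if (typeof history !== 'undefined') {
    // In a browser: change the URL without a full page reload,
    // so every state gets a crawlable, shareable URL.
    history.pushState({}, '', path);
  }
  return resolve(path);
}
```

Frameworks' routers (React Router, Vue Router, etc.) do exactly this plumbing for you.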
33. @patrickstox
Best Practice: Use a 404 When Possible
JS can’t throw a 404, but you have options:
JS redirect or route to 404 page on a server with an actual 404 response.
Not as great: add a noindex to any error page along with a message like "404 Page Not Found". It will be treated as a soft 404, even though a 200 status code is shown.
Analytics may still fire, and many SEO and other tools don't look for soft 404s, so missing the real status code causes issues when looking at the data.
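The "real 404" option means deciding the status server-side; a sketch (the route set is illustrative):

```javascript
// Server-side sketch: return a real 404 status for unknown paths instead
// of letting the SPA answer everything with 200. Known paths are examples.
const knownPaths = new Set(['/', '/about', '/products/1']);

function statusFor(path) {
  return knownPaths.has(path) ? 200 : 404;
}

// Client-side fallback: route or redirect to a /404 page that the
// server itself serves with an actual 404 status code.
```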
34. @patrickstox
As an SEO
Almost any setup will have a version of a meta-tag module, Helmet, or something similar.
Think of these as your "SEO plugin". They let you set titles, descriptions, canonicals, etc.
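Under the hood, these modules boil down to writing head tags per route; a vanilla sketch (the site name, domain, and page object are hypothetical):

```javascript
// Sketch of what an "SEO plugin" module (e.g. Helmet) does under the
// hood: build the head tags for a route, then apply them to the DOM.
function headTagsFor(page) {
  return {
    title: `${page.name} | Example Store`,        // hypothetical site name
    description: page.summary.slice(0, 155),      // keep under ~155 chars
    canonical: `https://example.com${page.path}`, // assumed domain
  };
}

function applyHeadTags(tags) {
  if (typeof document === 'undefined') return; // no-op outside a browser
  document.title = tags.title;
  document.querySelector('meta[name="description"]')
    ?.setAttribute('content', tags.description);
  document.querySelector('link[rel="canonical"]')
    ?.setAttribute('href', tags.canonical);
}

const tags = headTagsFor({
  name: 'Red Shoes',
  summary: 'Comfortable red shoes.',
  path: '/red-shoes',
});
applyHeadTags(tags);
```

With server-side or pre-rendering, these tags end up in the initial HTML, which is what you want for crawlers.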
35. @patrickstox
Resources
SEO Mythbusting Series + JS SEO Series
https://www.youtube.com/channel/UCWf2ZlNsCGDS89VBF_awNvA
JS SEO Basics: https://developers.google.com/search/docs/guides/javascript-seo-basics
Dynamic Rendering: https://developers.google.com/search/docs/guides/dynamic-rendering
Mobile Friendly Test: https://search.google.com/test/mobile-friendly
Google Search Console: https://search.google.com/search-console
JavaScript Working Group: https://groups.google.com/forum/#!forum/js-sites-wg
36. @patrickstox
Bonus
Let’s talk about Service Workers, especially Edge Workers / Serverless
Functions / Service Workers at the Edge / Cloudflare Workers