

Common JavaScript FAQs for SEO

By Dan from Novos, in JavaScript, SEO, Strategy

3rd April, 2020

What is server side rendering?

Server side rendering is when each page is requested from the server on which a site is hosted, and the required HTML is rendered on the server and returned to the client.

What is client side rendering?

Client side rendering is when the HTML is rendered directly in the browser by JavaScript, rather than being built on the server for each request.

What is dynamic rendering?

Dynamic rendering is when an intermediate party sits between a visitor and your server to detect if requests are from a crawler (such as Googlebot or Bingbot) or from a user. If the request is from a bot it will serve a fully rendered DOM from a cache. If it detects a human visitor, it will serve the normal SPA.

Dynamic rendering, implemented with services such as prerender.io, Google's Puppeteer, or Rendertron, is a great stepping stone if full server side rendering cannot be achieved straight away.
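As a rough illustration, dynamic rendering amounts to user agent detection sitting in front of your server. This is a minimal sketch assuming an Express-style setup; the bot list, cache, and middleware shape are all illustrative assumptions, not a production implementation.

```javascript
// Hypothetical sketch of dynamic rendering: detect crawler user agents
// and decide whether to serve pre-rendered HTML from a cache.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /yandex/i, /duckduckbot/i];

function isCrawler(userAgent) {
  // Treat any user agent matching a known bot pattern as a crawler.
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent || ""));
}

// Express-style middleware (the req/res shape is an assumption here):
function dynamicRendering(renderedCache) {
  return (req, res, next) => {
    const ua = req.headers["user-agent"];
    if (isCrawler(ua) && renderedCache.has(req.url)) {
      // Bots get the fully rendered DOM from the cache...
      res.send(renderedCache.get(req.url));
    } else {
      // ...while human visitors get the normal SPA.
      next();
    }
  };
}
```

The key point for the cloaking question below is that both audiences ultimately receive the same content; only the rendering work is moved.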

Is dynamic rendering cloaking?

Despite dynamic rendering logically seeming like it would fit the bill for cloaking, it is not considered cloaking by Google provided you are serving both Google and the user similar content. If you use dynamic rendering to serve Googlebot completely different content to users, this can still be considered cloaking.

How can you check if your website is server side rendered?

Both server side and client side rendered websites change the URL in the browser, so to check if it’s server side rendered you need to check the header response. The easiest way to do this is to use a plugin such as Redirect Path, or Link Redirect Trace.

When you first visit your site, the URL you requested will return a header response in the browser.

If your site is client side rendered, however, when you navigate to a different page you will see a different URL in the browser but not in the header response, which will still show the first URL you visited. This is because the page is loaded on the client side; no further request is made to the server, so the header response does not change.

How can you check if your JavaScript website can be crawled?

There are a few steps you can take to determine which areas of your JavaScript website can be crawled.

  1. Turn JS off in your browser (there are loads of plugins available to do this easily) to see which areas of your site/links can and can’t be loaded.
  2. Crawl the site using Screaming Frog to see what is and isn't crawled. Initially you should use the default Text Only crawling. You may also want to crawl with Googlebot as the User Agent, as this may change how your website responds if you're using something such as pre-rendering.
  3. Crawl the site using Screaming Frog with JavaScript rendering (and store the HTML and rendered HTML to allow for comparisons). Links in the HTML can be crawled, links in the rendered HTML can be found only after rendering, and links not present cannot be crawled.
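For quick spot checks, the comparison in step 3 can be approximated by diffing the links found in the raw HTML against those in the rendered HTML. A minimal sketch follows; the regex is a rough approximation rather than a real HTML parser, so use a proper crawler such as Screaming Frog for real audits.

```javascript
// Illustrative sketch: extract <a href> links from a string of HTML so
// the raw source can be compared with the rendered DOM.
function extractLinks(html) {
  const links = [];
  const re = /<a\b[^>]*\bhref=["']([^"']+)["']/gi;
  let match;
  while ((match = re.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

// Links present only after rendering are the ones search engines must
// execute JavaScript to discover:
function linksRequiringRender(rawHtml, renderedHtml) {
  const rawLinks = new Set(extractLinks(rawHtml));
  return extractLinks(renderedHtml).filter((href) => !rawLinks.has(href));
}
```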

What are the key things to get right when using JavaScript websites?

Every website is different and needs to be treated as such. However, there are some key things to enable if you’re running a JavaScript website:

  1. Ensure every page has an individual URL by utilising the History API.
  2. Ensure your site is server side rendered or pre-rendered so search engine crawlers can access all URLs.
  3. Internally link to your pages; even if you are using JavaScript to load in page content (i.e. via modal windows), make sure you are also linking via HTML.
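Point 1 above can be sketched with the History API. The route names here are hypothetical, and `history.pushState` exists only in the browser, so the call is kept inside a function rather than run at load time.

```javascript
// Hypothetical sketch of giving each SPA view its own URL via the
// History API.
function viewToPath(view) {
  // Map an internal view name to a crawlable URL (illustrative routes).
  const routes = { home: "/", products: "/products", about: "/about" };
  return routes[view] || "/";
}

function showView(view) {
  const path = viewToPath(view);
  // Update the address bar without a full page reload, so each view
  // has an individual, linkable, indexable URL.
  window.history.pushState({ view }, "", path);
  // ...then render the view's content client side.
}
```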

For a more in depth guide to setting up SEO for popular JavaScript frameworks, check out our Angular SEO guide and React SEO guide.

Since Google announced that they can crawl JavaScript, do I need to worry about my site being found?

Google in 2019 announced that they were running an evergreen version of Chromium, which meant that many more JavaScript features were supported. However, this does not mean everything can be built in JavaScript without SEO considerations.

The aim is to make your website as easy as possible for search engines to find and crawl. The less effort search engines have to put in to find and see your content, the more likely it is to rank well.

Just because they can (or say they can), doesn’t mean they will, or you should.

Similarly, it's important to remember that not all search engines are as far ahead as Google in terms of JavaScript rendering. If you don't make your website easy to crawl and your content visible without JavaScript, you may be losing out on rankings, traffic, conversions and revenue from other search engines such as Bing, the engines powered by Bing (Yahoo, Ecosia etc.) and DuckDuckGo. Even if this is only 5% of the traffic your site might receive, it can still represent significant revenue that's being totally ignored.

There are also some JavaScript features that Google still cannot crawl, such as onclick events. Snipcart have a fantastic guide on how to optimise SEO for SPAs, including Angular.

What JavaScript frameworks are there?

Although many different JavaScript frameworks can be used, a few are becoming increasingly popular, such as Angular, React and Vue.

How does Google deal with infinite scroll?

Infinite scroll requires a user to interact with a website before further content loads; the products included initially in the rendered DOM do not include those loaded later via interaction. Therefore, if using infinite scroll on your site, it's best to also include pagination in your HTML.

This means that Google can still access all products that are related to a particular category. For more information about pagination and SEO, check out our blog post on pagination for e-commerce sites.
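One way to get the best of both is to keep plain paginated links in the HTML, which Google can crawl, and progressively enhance them into infinite scroll for users. A hedged sketch, assuming a `?page=N` URL scheme (`IntersectionObserver` is browser-only, so the enhancement is wrapped in a function):

```javascript
// Compute the next paginated URL from the current one (the ?page=N
// scheme is an assumption for illustration).
function nextPageUrl(currentUrl) {
  const url = new URL(currentUrl);
  const page = parseInt(url.searchParams.get("page") || "1", 10);
  url.searchParams.set("page", String(page + 1));
  return url.toString();
}

// Browser-only enhancement: when a sentinel element near the bottom of
// the product list scrolls into view, fetch the next page for the user.
function enableInfiniteScroll(sentinel, loadPage) {
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      loadPage(nextPageUrl(window.location.href));
    }
  });
  observer.observe(sentinel);
}
```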

What is an SPA?

An SPA is a Single Page Application (unfortunately not a sauna and steam room); it is a website that loads all resources upon the initial load so it doesn't have to make additional full page requests to the server when a user interacts with the site.

This makes for a smooth, seamless experience; however, it can have implications for SEO because, depending on the setup, it can make it difficult for search engine crawlers to access the site.

Is JavaScript bad for SEO?

This is a complex question. JavaScript is not necessarily “bad” for SEO as most JS resources can be understood and read by Google. However, relying heavily and only on JavaScript can make it more difficult for Google to find content and understand your site.

Content that is only viewable after JavaScript has rendered requires Google to render the site (a process separate to crawling the raw HTML). This requires more resources from Google and can slow down the speed at which Google finds and understands your site (as it must prioritise the resources it gives). Similarly, it can introduce differences between the raw HTML and the rendered HTML, which means Google has the additional difficulty of understanding the context and signals from your site.

Google is getting better and better at rendering JavaScript; however, ideally we want to make it as easy as possible for Google to find and understand our content. Using JavaScript creates an additional step which, although not necessarily detrimental, isn't as easy as possible for Google.

How can I learn more about JavaScript SEO?

Google has a great video resource about JavaScript SEO which we would highly recommend alongside their entire video course if you want to learn more about how to ensure you’re following the basics.

If you’re looking for more in-depth consultancy, then please do get in contact with us, as we have lots of experience optimising websites that use JavaScript.

Does Googlebot execute JavaScript?

Googlebot is Google's web crawler; it does not execute JavaScript itself. JavaScript is executed in a separate rendering phase by Google's rendering service, which feeds its indexing system (Caffeine).

Google first crawls the raw HTML of a website using Googlebot, and then renders the content in a second phase.

What is the DOM?

The DOM is the Document Object Model: the browser's live representation of the page, including any changes made after JavaScript has been executed. The DOM can be seen in a browser such as Chrome using the Inspect function.

The DOM is different from the raw HTML that can be seen using "View Source", as the raw HTML has not had JavaScript executed against it.
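The difference can be illustrated with a toy example. The "rendering" here is simulated with a string replacement; in a real browser, a script would modify the live DOM, and tools that compare source and rendered HTML are looking for exactly this kind of gap.

```javascript
// The raw HTML the server sends: what "View Source" shows.
const rawHtml = '<div id="product-list"></div>';

// Simulate what a client side script might do: inject content that was
// never present in the raw HTML.
function simulateClientSideRender(html) {
  return html.replace(
    '<div id="product-list"></div>',
    '<div id="product-list"><a href="/product-1">Product 1</a></div>'
  );
}

// The link to /product-1 exists in the rendered DOM but not in the raw
// HTML, so a crawler that doesn't render JavaScript will never see it.
const renderedDom = simulateClientSideRender(rawHtml);
```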

Can Bing render JavaScript?

Bing said in 2018 that it can render JavaScript; however, it can have difficulty with JavaScript heavy websites and may not be able to process JavaScript as easily as modern browsers (and presumably Google, whose renderer is now evergreen).

Presumably Bing has improved its rendering capabilities since then; however, we are still seeing some of our clients running JavaScript frameworks such as Angular have difficulty getting deeper pages indexed in Bing. This may be due to relying on pre-rendering (which requires being set up to serve rendered HTML to Bingbot), but it suggests that Bing may also have difficulty rendering JS content even if it is server side rendered.

Article by Dan
Dan is the Delivery Director at NOVOS. A former neuroscientist, Dan entered the world of SEO by working with one of the top SEO agencies in the country and has architected several award-winning SEO campaigns since then. Dan joined NOVOS as an SEO manager, and his vast knowledge of technical SEO has helped NOVOS execute even the most complicated eCommerce SEO plans. He is phenomenal at solving client problems and knows exactly the high standards we want to deliver, which is why he is now our Delivery Director.

Struggling with JavaScript?

We're eCommerce specialists for a reason. Get in touch with us today to find out more.
