Fundamentals of JavaScript SEO and Strategies for Enhancing User Experience

In the modern digital landscape, building websites and web applications with JavaScript requires ensuring that their content remains easily discoverable by users. While traditional website development is largely a collaboration between developers and directors, JavaScript SEO introduces unique challenges that demand a specialized understanding of SEO principles.

The complexity of JavaScript SEO stems from an additional rendering step that conventional websites do not have. Even if a site functions flawlessly as a JavaScript application, its content may fail to appear in Google Search if it was not designed with SEO in mind. This issue arises primarily from client-side rendering, as opposed to server-side rendering.

Particularly for Single Page Applications (SPAs), it becomes crucial to apply JavaScript SEO principles thoughtfully.

How Google Processes JavaScript-Based Web Apps

Google processes JavaScript-based web applications in three phases:

  • Crawling
  • Rendering
  • Indexing

For standard websites, Googlebot typically crawls and indexes pages before ranking them. JavaScript-dependent sites, however, introduce an additional step: rendering. While server-side rendering allows URLs and site content to be crawled and analyzed straightforwardly, sites relying on JavaScript may render content client-side. Consequently, Google may not immediately be able to access content generated by JavaScript.

In essence, the rendering approach, whether server-side or client-side, is what distinguishes traditional websites from JavaScript-based sites and applications. Improper JavaScript implementation can hinder Google's ability to interpret a website accurately.

For developers and SEO specialists building websites or web apps with JavaScript, it is imperative not only to follow coding best practices but also to write SEO-friendly scripts with Googlebot in mind. Ensuring visibility in Google's search results therefore requires a blend of conventional SEO knowledge and JavaScript coding expertise.


Implementing JavaScript SEO on Google: A Guide to Enhancing User Experience

Even within the realm of JavaScript SEO, the approach to optimization mirrors traditional SEO practices for standard websites. However, a key distinction lies in coding methodologies. While JavaScript code may function correctly, it’s crucial to ensure it communicates effectively with Google to achieve desired SEO outcomes.

Key areas of focus in JavaScript SEO include:

  • Titles and descriptions
  • Compatible coding
  • HTTP status codes
  • Use of the History API
  • Canonical tags

Create Pages with Unique Titles and Descriptions

Configuring titles is a fundamental and cost-effective SEO measure. Every page must have a title tag that is distinct from all others; no two pages should share the same title. Adding a meta description also makes it easier for users to find the site they are looking for.

Both the title and the meta description must succinctly represent the page content. A significant discrepancy between the title and the content can adversely affect search ranking, so align both with the user's search intent.

Both elements can be set using JavaScript, so if they have not been implemented yet, they should be addressed immediately.
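
As a minimal sketch, a title and meta description could be set dynamically like this (the page-specific strings are placeholder assumptions):

// Set a unique title for the current view.
document.title = 'Blue Widgets | Example Store';

// Reuse an existing description tag if present; otherwise create one.
let metaDescription = document.querySelector('meta[name="description"]');
if (!metaDescription) {
  metaDescription = document.createElement('meta');
  metaDescription.setAttribute('name', 'description');
  document.head.appendChild(metaDescription);
}
metaDescription.setAttribute('content', 'Browse our selection of blue widgets, with specifications and pricing.');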

Ensure Compatible Coding

JavaScript's rapid evolution and the plethora of APIs provided by browsers mean that not all JavaScript features are supported by Google. To bridge this gap, employ differential serving and polyfills (JavaScript code that allows newer features to function in older environments).

Although often overlooked in standard website development, compatibility is crucial in JavaScript SEO because it directly influences whether Google can recognize your pages.
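
As a hedged illustration of differential serving, the module/nomodule pattern serves a modern ES module build to current browsers while older browsers fall back to a transpiled, polyfilled bundle (the file names here are assumptions):

<!-- Modern browsers load the ES module build. -->
<script type="module" src="/js/app.modern.js"></script>
<!-- Older browsers ignore type="module" and load the transpiled, polyfilled bundle instead. -->
<script nomodule src="/js/app.legacy.js"></script>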

Proper Use of HTTP Status Codes 

Googlebot employs HTTP status codes to ascertain any issues encountered during page crawling.

The key HTTP status codes, and their impact on Google, are listed below; a brief server-side sketch follows the list.

200 (Success): Indicates successful access, which may lead to indexing.

301 (Moved Permanently): Denotes a permanent redirect, strongly suggesting that the redirect target is the canonical page.

302 (Found): Implies a temporary redirect, weakly suggesting that the redirect target is the canonical page.

404 (Not Found): The page cannot be found, thus if new, it will not be indexed, and if previously indexed, it may be removed.

503 (Service Unavailable): Due to server error, the URL may be targeted for index removal.
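
As a sketch of returning these codes from the server side (assuming an Express app; findProduct and renderProductPage are hypothetical helpers):

const express = require('express');
const app = express();

app.get('/products/:id', (req, res) => {
  const product = findProduct(req.params.id); // hypothetical lookup
  if (!product) {
    return res.status(404).send('Not found'); // tells Googlebot the page does not exist
  }
  res.status(200).send(renderProductPage(product)); // hypothetical page renderer
});

// Permanently redirect an old URL to its canonical location.
app.get('/old-products/:id', (req, res) => {
  res.redirect(301, `/products/${req.params.id}`);
});

app.listen(3000);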

Because HTTP status codes directly affect whether pages are indexed, JavaScript SEO that targets search must use them correctly. However, Single Page Applications (SPAs) rendered on the client may not be able to return meaningful HTTP status codes.

In such cases, to avoid soft 404 errors, implement one of the following:

  • Use JavaScript to redirect to a URL for which the server returns a 404 HTTP status code.
  • Use JavaScript to add a <meta name="robots" content="noindex"> tag to the error page.


Sample Code for Implementing Redirects

fetch(`/api/products/${productId}`)
  .then(response => response.json())
  .then(product => {
    if (product.exists) {
      showProductDetails(product); // shows the product information on the page
    } else {
      // this product does not exist, so this is an error page.
      window.location.href = '/not-found'; // redirect to a page that returns a 404 status code on the server.
    }
  })

Sample Code for Using the Noindex Tag

fetch(`/api/products/${productId}`)
  .then(response => response.json())
  .then(product => {
    if (product.exists) {
      showProductDetails(product); // shows the product information on the page
    } else {
      // this product does not exist, so this is an error page.
      // Note: this example assumes there is no other robots meta tag present in the HTML.
      const metaRobots = document.createElement('meta');
      metaRobots.name = 'robots';
      metaRobots.content = 'noindex';
      document.head.appendChild(metaRobots);
    }
  })

Using the History API

Google can only crawl links in anchor tags that have href attributes. While SPAs can load content based on the URL hash or fragment, Googlebot does not process URL fragments.

For JavaScript SEO, construct links using anchor tags to ensure Googlebot can follow them.

Inappropriate Use of Fragments Example

<nav>
  <ul>
    <li><a href="#/products">Our products</a></li>
    <li><a href="#/services">Our services</a></li>
  </ul>
</nav>

<h1>Welcome to example.com!</h1>
<div id="placeholder">
  <p>Learn more about <a href="#/products">our products</a> and <a href="#/services">our services</a></p>
</div>

<script>
window.addEventListener('hashchange', function goToPage() {
  // this function loads different content based on the current URL fragment
  const pageToLoad = window.location.hash.slice(1); // URL fragment
  document.getElementById('placeholder').innerHTML = load(pageToLoad);
});
</script>

Appropriate Example Implementing the History API

<nav>
  <ul>
    <li><a href="/products">Our products</a></li>
    <li><a href="/services">Our services</a></li>
  </ul>
</nav>

<h1>Welcome to example.com!</h1>
<div id="placeholder">
  <p>Learn more about <a href="/products">our products</a> and <a href="/services">our services</a></p>
</div>

<script>
function goToPage(event) {
  event.preventDefault(); // stop the browser from navigating to the destination URL.
  const hrefUrl = event.target.getAttribute('href');
  const pageToLoad = hrefUrl.slice(1); // remove the leading slash
  document.getElementById('placeholder').innerHTML = load(pageToLoad);
  window.history.pushState({}, document.title, hrefUrl); // update the URL as well as the browser history.
}

// Enable client-side routing for all links on the page
document.querySelectorAll('a').forEach(link => link.addEventListener('click', goToPage));
</script>

Setting Canonical Tags

Setting canonical tags via JavaScript is generally not recommended, but it is possible. When a canonical link tag is injected with JavaScript, Google picks up the canonical URL at rendering time, so the correct URL signal is still communicated.

fetch('/api/cats/' + id)
  .then(function (response) { return response.json(); })
  .then(function (cat) {
    // creates a canonical link tag and dynamically builds the URL
    // e.g. https://example.com/cats/simba
    const linkTag = document.createElement('link');
    linkTag.setAttribute('rel', 'canonical');
    linkTag.href = 'https://example.com/cats/' + cat.urlFriendlyName;
    document.head.appendChild(linkTag);
  });

Enhancing User Experience with JavaScript SEO

Even when using JavaScript, the considerations for user experience remain unchanged. However, settings that are typically available for standard websites may not always be applicable when JavaScript is in use.

To enhance user experience with JavaScript SEO, it is particularly important to pay attention to the following aspects.

  • Use of robots meta tags
  • Leveraging long-term caching
  • Structured data implementation
  • Proper rendering techniques
  • Lazy loading images
  • Accessibility-conscious design

Use of Robots Meta Tags

Robots meta tags control whether Google indexes a page or follows its links. Although noindex and nofollow are used relatively rarely, they are appropriate for pages that should not appear in Google's index. If content is not meant to be indexed, JavaScript can set these tags dynamically based on API response outcomes, preventing unnecessary indexing.

fetch('/api/products/' + productId)
  .then(function (response) { return response.json(); })
  .then(function (apiResponse) {
    if (apiResponse.isError) {
      // get the robots meta tag
      var metaRobots = document.querySelector('meta[name="robots"]');
      // if there was no robots meta tag, add one
      if (!metaRobots) {
        metaRobots = document.createElement('meta');
        metaRobots.setAttribute('name', 'robots');
        document.head.appendChild(metaRobots);
      }
      // tell Google to exclude this page from the index
      metaRobots.setAttribute('content', 'noindex');
      // display an error message to the user
      // (errorMsg is assumed to be an element already present on the page)
      errorMsg.textContent = 'This product is no longer available';
      return;
    }
    // display product information
    // ...
  });

Utilizing Long-Term Cache Storage

Googlebot actively uses caching to reduce network requests and resource usage. However, the Web Rendering Service (WRS) may ignore cache headers, leading to the use of outdated JavaScript or CSS.

To resolve this issue, change the URL of a resource whenever its content changes so that new versions are downloaded. This can be achieved by incorporating a fingerprint (a hash of the file contents) or a version number into the file name.

When responding to requests for URLs that embed a fingerprint or version number, and whose content therefore never changes, add Cache-Control: max-age=31536000 (31,536,000 seconds, i.e. one year) to the response.
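
As a minimal sketch, assuming an Express server and a directory of fingerprinted build assets (the paths are placeholders), the long-lived cache header could be set like this:

const express = require('express');
const app = express();

// Fingerprinted files such as main.2bb85551.js can safely be cached for a year,
// because any content change produces a new file name and therefore a new URL.
app.use('/static', express.static('build/static', {
  maxAge: 31536000000, // one year in milliseconds -> Cache-Control: max-age=31536000
  immutable: true,     // adds the immutable directive for supporting browsers
}));

app.listen(3000);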

Using Structured Data

There are two methods for generating structured data: creating structured data using JavaScript or adding information to structured data that has been rendered on the server side. In either case, the structured data is recognized and processed at the time of page rendering.

When using JavaScript to handle structured data, you will generate JSON-LD and insert it into the page. To avoid any issues, always perform implementation tests using the Rich Results Test.

An example of generating structured data with JavaScript

fetch('https://api.example.com/recipes/123')
  .then(response => response.text())
  .then(structuredDataText => {
    const script = document.createElement('script');
    script.setAttribute('type', 'application/ld+json');
    script.textContent = structuredDataText;
    document.head.appendChild(script);
  });

Correct Rendering

Google can only recognize content that appears in the rendered HTML. To ensure that Google continues to recognize content after rendering, it is necessary to check the rendered HTML using tools such as the Rich Results Test and the URL Inspection Tool.

If content is not displayed correctly in the rendered HTML, Google cannot index it. Therefore, in JavaScript SEO, it is essential to verify that rendering is performed correctly.

For instance, coding like the following enables Google to index the content:

<script>
class MyComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    let p = document.createElement('p');
    p.innerHTML = 'Hello World, this is shadow DOM content. Here comes the light DOM: <slot></slot>';
    this.shadowRoot.appendChild(p);
  }
}
window.customElements.define('my-component', MyComponent);
</script>

<my-component>
  <p>This is light DOM content. It's projected into the shadow DOM.</p>
  <p>WRS renders this content as well as the shadow DOM content.</p>
</my-component>

Implementing Lazy Loading for Images

Because images can place a substantial load on network resources and performance, it is recommended to load them only when the user is about to view them. Deferring the loading of non-critical or off-screen content, known as lazy loading, is a widely recommended practice for improving performance and user experience. However, if implemented incorrectly, lazy loading can prevent Google from recognizing the targeted content.

To ensure Google can detect all content on a page, implement lazy loading so that content loads as soon as it becomes visible in the viewport. This can be achieved through the following (a brief sketch follows the list):

  • Native lazy loading for images and iframes.
  • The IntersectionObserver API and a polyfill.
  • JavaScript libraries that load data only when it enters the viewport.
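
As a minimal sketch (the data-src attribute and lazy class are assumed markup conventions, not a standard API), native lazy loading and an IntersectionObserver approach might look like this:

<!-- Native lazy loading: the browser defers this image until it nears the viewport. -->
<img src="/images/product.jpg" alt="Product photo" loading="lazy" width="640" height="480">

<script>
// IntersectionObserver approach: swap in the real source when a placeholder
// image becomes visible, then stop observing it.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // load the real image
      obs.unobserve(img);
    }
  });
});
document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
</script>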

Designing with Accessibility in Mind

While creating pages that cater to search engines and users is a given, designing sites with JavaScript SEO in mind should also consider the needs of users without JavaScript-enabled browsers.

The most reliable and straightforward method to test a site’s accessibility is to browse it with JavaScript disabled. Displaying the site in text-only mode can help identify other content that may be difficult for Google to access, such as text embedded in images.

Summary

From the perspective of an SEO consultant, the objectives of JavaScript SEO are no different from those of standard websites. However, JavaScript introduces the additional step of rendering, which requires specific accommodations. Even if the code functions correctly, special settings may be needed with SEO in mind, so it is important to understand these considerations before production begins. In SEO for Single Page Applications (SPAs) in particular, the points that can be addressed are limited, and targeted measures focused on rendering are required.
