
NgSysV.SEO (Search Engine Optimisation)

This post series is indexed at NgateSystems.com. You'll find a super-useful keyword search facility there too.

Last reviewed: Nov '24

1. Introduction

Once you've deployed your application into the Google Cloud it becomes a target for the "web spiders" that patrol the web in search of content to add to their keyword "indexes". Once your site is indexed, people may see it in Search Engine returns.

This is great if it all works. The search engine will drive business in your direction and won't charge you a penny. But in practice, you have to encourage the spiders to index your site prominently. This is what "search engine optimisation" (SEO, for short) is all about.

Getting good SEO for your site involves:

  • Providing a sitemap to help the spiders navigate your site
  • Using SSR (Server-side-rendering) and Pre-rendering to make your "crawl budget" go further
  • Helping the bots to locate useful "index-worthy" content in your pages

2. Providing sitemap and robots files to guide web spiders

Your site should provide a sitemap file that lists all the routes you want Google (and other search engines) to index. Indexing spiders will usually discover your pages anyway, provided the pages in your site's "tree" hierarchy are properly linked via anchor links. But problems may arise if your site is large, or new and still poorly referenced by other sites.

These problems are fixed by creating a "site map" file. A sitemap can be formatted in several ways but, at its simplest, the indexing engine will be happy with a plain text file that lists your pages as follows:

// /static/sitemap.txt    - Don't copy this line
https://myProjectURL/inventory-display
https://myProjectURL/inventory-maintenance
etc

Note the following:

  • Pages deployed to the Google App Engine are automatically provisioned with an https (encrypted) URL.
  • "myProjectURL" will most likely be a "custom" URL that you have explicitly linked to your deployment URL.
  • You only need to add extensions to the "clean" URLs shown above if they reference static files - a ".pdf" document, for example.
  • A text sitemap can be called whatever you like, but it's customary to call it "sitemap.txt". In a Svelte webapp, however, you must store it in your project's static folder so that it is included in the build and deployed to the root of your webapp.
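If your routes are already listed somewhere in code, you can generate the sitemap rather than maintain it by hand. Here's a minimal sketch - the `buildSitemap` helper, the base URL and the route list are all invented for illustration:

```javascript
// Build the body of a plain-text sitemap from a list of route paths.
// The base URL and the route names are hypothetical examples.
function buildSitemap(baseUrl, routes) {
  return routes.map((route) => `${baseUrl}${route}`).join("\n") + "\n";
}

const sitemap = buildSitemap("https://myProjectURL", [
  "/inventory-display",
  "/inventory-maintenance",
]);
console.log(sitemap);
```

You could run something like this in a small build script that writes the result to /static/sitemap.txt before each deployment.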

The robots file provides a "partner" to the sitemap file that:

  • Blocks specific spiders: You can block certain web crawlers from accessing certain parts of your site.
  • Blocks specific directories: For example, you might block /admin/ or /private/ to keep those pages out of search engine indexes.
  • Specifies the sitemap's location.

Here's an example:

// /static/robots.txt     - Don't copy this line
User-agent: *
Disallow: /inventory-maintenance
Sitemap: https://myProjectURL/sitemap.txt

In a Svelte project, the robots file (whose name, unlike the sitemap's, must be exactly "robots.txt") must be stored as /static/robots.txt.
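If you want to sanity-check your rules before deploying, a few lines of JavaScript can mimic how a crawler applies Disallow prefixes. This is a deliberately simplified sketch - real crawlers implement the full robots exclusion standard (RFC 9309), including wildcards, Allow rules and per-agent groups:

```javascript
// Simplified check: a path is blocked if it starts with any Disallow prefix.
// Real crawlers follow RFC 9309 (wildcards, Allow precedence, agent groups).
function isBlocked(path, disallowPrefixes) {
  return disallowPrefixes.some(
    (prefix) => prefix !== "" && path.startsWith(prefix)
  );
}

console.log(isBlocked("/inventory-maintenance", ["/inventory-maintenance"])); // true
console.log(isBlocked("/inventory-display", ["/inventory-maintenance"])); // false
```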

You can check that your robots.txt and sitemap.txt files are being correctly deployed to your project's URL root by trying to view them using your browser:

Each of the following URLs entered into the browser's "search" box should respond by displaying the file contents.

https://myProjectURL/robots.txt
https://myProjectURL/sitemap.txt

Further information on all these issues can be found in Google's "Learn about sitemaps" guide.

Once you've successfully deployed your sitemap you might find it useful to give Google a "heads-up" by submitting the sitemap to the Google Search Console.

You start here by registering a "property" - ie the URL of your site. This involves running a procedure that assures Google you own the site. The procedure starts with the console downloading a "site-verification" file into your "downloads" folder. You must copy this into your Svelte static folder and rebuild/redeploy your webapp so that the file is uploaded to your remote site. If, when you click the "Verify" button on the authentication screen, Google finds the file with the content it is expecting, it will be satisfied that you genuinely are the owner.

Clicking on the "Sitemaps" tool in the menu on the left of the screen will now enable you to enter your sitemap URL (sitemap.txt) and get a "Success" status in the Submitted Sitemaps window.

The Search Console is a sophisticated tool for monitoring the progress of indexing on your site and resolving any problems that might have been reported. See Get started with Search Console for further details.

3. Using "Server-side-rendering" and "Pre-rendering" to make your "crawl budget" go further

While, in recent years, search engines have got better at indexing content rendered with client-side JavaScript, they are happier with pages that contain only HTML. Server-side rendered (SSR) content (ie pages whose HTML has already been generated by running database-access JavaScript on the server) is indexed more frequently and reliably. Nobody but Google knows how their indexing engines work, but a reasonable guess runs something like this.

First, your webapp is awarded a "site ranking" (determined in an obscure manner, but probably influenced by the number of "backlinks" on sites that reference your URL). This in turn awards you a certain "crawl budget" - the amount of time the indexing engine is prepared to spend indexing your pages. You'll want to spend this wisely. Server-side rendering eases the bot's workload and makes your budget go further. So, if you want good SEO, you should use SSR!
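In a SvelteKit webapp, SSR in practice means doing your data access in a +page.server.js load function, so the page arrives at the browser (and at the indexing bot) as finished HTML. Here's a minimal sketch - the getInventory helper and its data are invented stand-ins for a real database query, and in the real file the load function would be preceded by the `export` keyword:

```javascript
// Sketch of /src/routes/inventory-display/+page.server.js
// (in the real file, `load` must be exported).
// getInventory is a stand-in for a real database query; its data is invented.
async function getInventory() {
  return [{ sku: "A100", stock: 4 }];
}

async function load() {
  // SvelteKit runs this on the server; the returned object is baked
  // into the HTML that the browser and the indexing bot receive.
  const items = await getInventory();
  return { items };
}
```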

The ultimate expression of server-side rendering is "pre-rendering", where a "static" page - one that displays data that either never changes or changes only rarely - is rendered at build time by adding the following statement to its +page.js or +page.server.js file:

// /src/routes/myPage/+page.js    - Don't copy this line
export const prerender = true;

Because the server now only has to deliver pure HTML, your crawl budget goes even further and your users receive a lightning-fast response! See Post 4.3 for details of an arrangement to automate pre-rendering builds using a scheduler.

4. Helping the bots to locate useful "index-worthy" content in your pages

Google's docs at Overview of crawling and indexing topics contain everything you need to know. Here's a summary:

First of all, you need to get your head around Google's "Mobile first" policy. The Google spider will analyse your site as it would be seen by a browser running on a mobile phone. This means that it will downgrade your site's "reputation" (and its crawl budget) if it considers, for example, that your font size is too small.

If your webapp has been designed for desktop users, this may come as a blow. Try your site on your phone: you may well conclude that it's unusable there.

The way out of this is to use "responsive styling" (see Post 4.4) so that the webapp senses the page width of the device it's running on and adjusts things accordingly.

It may be that parts of your webapp aren't appropriate for mobile operation. You may be tempted to remove these, but Google would remind you that most of its indexing now comes from mobile pages. Their recommendation is that you gently conceal such content behind tabs or "accordions" rather than delete it.

What web spiders are primarily looking for is content - information that search engine customers will find useful. But they need your assistance in locating and interpreting this. Here are some tips on how you might provide it:

  • Give each page well-written and unique <title>, <meta name="description" content=" ... "> and <link rel="canonical"> elements inside a <svelte:head> code block. Here's an example:

// /src/routes/inventory-display/+page.svelte    - Don't copy this line
<svelte:head>
    <title>Inventory Display</title>
    <meta name="description" content="Live stock levels for the product inventory" />
    <link rel="canonical" href="https://myProjectURL/inventory-display" />
</svelte:head>

This arrangement delegates to Svelte the awkward task of inserting the <title>, <meta> and <link> elements into the DOM. The <link> element here tells the indexing bot which "brand" of a website that might be reachable variously as "https://myUrl" and "https://myUrl/" etc, etc is the "main" or "preferred" version. Ask chatGPT for a tutorial on the word "canonical" if you'd like the full story.

  • Use "structured" data descriptions in sites (such as "recipe" sites) displaying fixed classes of information in a tightly defined format. "Structured data" in this context references a standardized format for providing information about a page and classifying its content. The most common format for structured data on the web is the one published by schema.org. Ask chatGPT for an example if you'd like to know more about this and how you would use structured data in a Svelte webapp.
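As an illustration of the structured-data idea, here's how you might assemble a schema.org "Recipe" object in JavaScript and serialise it for a <script type="application/ld+json"> element. All the field values are invented examples:

```javascript
// Build a schema.org "Recipe" description; every field value here is an
// invented example - substitute your page's real content.
const recipe = {
  "@context": "https://schema.org",
  "@type": "Recipe",
  name: "Party Coffee Cake",
  author: { "@type": "Person", name: "Mary Stone" },
  datePublished: "2018-03-10",
  description: "A coffee cake that's perfect for parties.",
};

// This string would go inside a <script type="application/ld+json"> element,
// typically placed in the page's <svelte:head> block.
const jsonLd = JSON.stringify(recipe, null, 2);
console.log(jsonLd);
```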
