NgSysV.SEO (Search Engine Optimisation)
This post series is indexed at NgateSystems.com. You'll find a super-useful keyword search facility there too.
Last reviewed: Nov '24
Once you've deployed your application into the Google Cloud it becomes a target for the "web spiders" that patrol the web in search of content to add to their keyword "indexes". Once your site is indexed, people may see it in search engine results.
This is great if it all works. The search engine will drive business in your direction and won't charge you a penny. But in practice, you have to encourage the spiders to index your site prominently. This is what "search engine optimisation" (SEO, for short) is all about.
Getting good SEO for your site involves:
- providing a "sitemap" file that tells the search engines which pages you want indexed
- providing a "robots" file that tells them which pages you don't want indexed, and where the sitemap lives
- registering your site with the Google Search Console
- using server-side rendering (and, where possible, pre-rendering) so that pages are cheap to crawl
- making sure the site works well on a mobile phone
- giving each page clearly signposted content
Your site should provide a sitemap file that lists all the routes you want Google (and other search engines) to index. Indexing spiders will usually discover them anyway, provided the pages in your site's "tree" hierarchy are properly linked via anchor links. But problems may arise if your site is large, or new and still poorly referenced by other sites.
These problems are fixed by creating a "site map" file. Site maps can be formatted in several ways but, at the simplest level, the indexing engine will be happy with a plain text file that lists your pages as follows:
// /static/sitemap.txt - Don't copy this line
https://myProjectURL/inventory-display
https://myProjectURL/inventory-maintenance
etc
Note the following:
- In a Svelte project the file should be stored as /static/sitemap.txt so that it is deployed at the root of your site.
- Replace myProjectURL with the URL of your deployed webapp.
- The "Don't copy this line" comment is there only to identify the file in this post; the deployed file should contain just the list of URLs.
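Incidentally, if the list of pages changes often you don't have to maintain this text file by hand. Here's a minimal sketch (the route list is invented - in practice you might build it from your database) of how a SvelteKit server route at src/routes/sitemap.txt/+server.js could generate the same content dynamically instead of serving a static file:

// /src/routes/sitemap.txt/+server.js - a hypothetical alternative to the static file
const routes = ['inventory-display', 'inventory-maintenance']; // invented example routes

export function GET() {
    const body = routes.map((route) => `https://myProjectURL/${route}`).join('\n');
    return new Response(body, {
        headers: { 'Content-Type': 'text/plain' }
    });
}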
The robots file provides a "partner" to the sitemap file that:
- tells the indexing spiders which routes you don't want them to index, and
- tells them where to find your sitemap.
Here's an example:
// /static/robots.txt - Don't copy this line
User-agent: *
Disallow: /inventory-maintenance
Sitemap: https://myProjectURL/sitemap.txt
In a Svelte project, the robots.txt file (the filename is mandatory) must be stored as /static/robots.txt.
You can check that your robots.txt and sitemap.txt files are being correctly deployed to your project's URL root by trying to view them using your browser:
Each of the following URLs entered into the browser's "search" box should respond by displaying the file contents.
https://myProjectURL/robots.txt
https://myProjectURL/sitemap.txt
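If you'd rather check from the command line, here's a small sketch of a Node script (the filename and base URL are placeholders, and Node 18+ is assumed for its built-in fetch) that requests both files and reports their HTTP status:

// check-seo-files.mjs - a hypothetical helper script, not part of the webapp itself
const base = 'https://myProjectURL'; // replace with your deployed project URL

for (const file of ['robots.txt', 'sitemap.txt']) {
    const response = await fetch(`${base}/${file}`);
    console.log(`${file}: ${response.status} ${response.ok ? 'OK' : 'MISSING'}`);
}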
Further information on all these issues can be found in Google's Learn about sitemaps documentation.
Once you've successfully deployed your sitemap you might find it useful to give Google a "heads-up" by submitting the sitemap to the Google Search Console.
You start here by registering a "property" - ie the URL of your site. This involves running a procedure that lets you assure Google that you own the site. The procedure starts with the console downloading a "site-verification" file into your "downloads" folder. You must copy this into your Svelte /static folder and rebuild/redeploy your webapp so that the file is uploaded to your remote site. If Google can find the file, with the content it is expecting, when you click the "Verify" button on the verification screen, it will be satisfied that you genuinely are the owner.
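For reference, the downloaded file is a tiny file whose name and single line of content both carry your verification token. With an invented token, it looks something like this:

// /static/google1234567890abcdef.html - Don't copy this line (the token shown is invented)
google-site-verification: google1234567890abcdef.html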
Clicking on the "sitemaps" tool in the menu on the left of the screen will now enable you to enter your sitemap URL (sitemap.txt) and get a "Success" status in the Submitted Sitemaps window.
The Search Console is a sophisticated tool for monitoring the progress of indexing on your site and resolving any problems that might have been reported. See Get started with Search Console for further details.
While, in recent years, search engines have got better at indexing content rendered with client-side JavaScript, they are happier with pages that contain only HTML. Server-side rendered (SSR) content (ie pages whose HTML has already been generated by running database-access JavaScript on the server) is indexed more frequently and reliably. Nobody but Google knows exactly how their indexing engines work, but a reasonable guess runs something like this.
First, your webapp is awarded a "site ranking" (determined in an obscure manner, but probably influenced by the number of "backlinks" on sites that reference your URL). This in turn awards you a certain "crawl budget" - the amount of time the indexing engine is prepared to spend indexing your pages. You'll want to spend this wisely. Server-side rendering eases the bot's workload and makes your budget go further. So, if you want good SEO, you should use SSR!
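To make this concrete, here's a minimal sketch of the SSR pattern in SvelteKit. The getInventoryItems() helper is invented - stand it in for whatever database-access code your page needs - but the point is that the load function runs on the server, so the spider receives ready-made HTML rather than JavaScript it has to execute:

// /src/routes/inventory-display/+page.server.js - Don't copy this line
import { getInventoryItems } from '$lib/inventory'; // hypothetical database-access helper

export async function load() {
    // Runs on the server at request time; the page reaches the browser (and the spider) as finished HTML
    const items = await getInventoryItems();
    return { items };
}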
The ultimate expression of server-side rendering is where a "static" page - one that displays data that either never changes or changes only rarely - is rendered at build time by the presence of the following statement in its +page.js or +page.server.js file:
export const prerender = true;
Because the server now only has to deliver ready-made HTML, your crawl budget goes even further and your users receive a lightning-fast response! See Post 4.3 for details of an arrangement that automates pre-rendering builds using a scheduler.
Google's docs at Overview of crawling and indexing topics contain everything you need to know. Here's a summary:
First of all, you need to get your head around Google's "Mobile first" policy. The Google spider will analyse your site as it would be seen by a browser running on a mobile phone. This means that it will downgrade your site's "reputation" (and its crawl budget) if it considers, for example, that your font size is too small.
If your webapp has been designed for desktop users, this will come as a blow to you. Try your site on your phone and you will likely conclude it is completely useless.
The way out of this is to use "responsive styling" (see Post 4.4) so that the webapp senses the page width of the device it's running on and adjusts things accordingly.
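In essence, responsive styling is just CSS that adapts to the width of the viewport. Here's a minimal sketch inside a Svelte component (the .inventory-grid class is invented for illustration):

<!-- a fragment of a hypothetical +page.svelte -->
<div class="inventory-grid">...</div>

<style>
    .inventory-grid {
        display: grid;
        grid-template-columns: repeat(3, 1fr); /* three columns on a desktop screen */
    }

    @media (max-width: 640px) {
        .inventory-grid {
            grid-template-columns: 1fr; /* a single column on a narrow phone screen */
        }
    }
</style>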
It may be that parts of your webapp aren't appropriate for mobile operation. You may be tempted to remove these, but Google would remind you that most of its indexing comes from mobile pages. They recommend you instead gently conceal such content behind tabs or "accordions".
What web spiders are primarily looking for is content - information that search engine customers will find useful. But they need your assistance in locating and interpreting this. Here are some tips on how you might do this:
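The most important tip is to give every page its own <title> and <meta name="description"> tags - these are typically what a search engine displays in its results listing. In a Svelte page you can declare them with a <svelte:head> block. Here's a sketch (the route, title and description text are invented examples):

<!-- /src/routes/inventory-display/+page.svelte - Don't copy this line -->
<svelte:head>
    <title>Inventory Display - My Project</title>
    <meta name="description" content="Live stock levels for every product line in the inventory." />
</svelte:head>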
This arrangement delegates to Svelte the awkward task of inserting the tags into the <head> section of the page's rendered HTML.