This is a full-scale example of the Miklagárd SEO Team's Technical SEO Audit document from last year (2018), covering the top issues and general recommendations, now available to the wider public, i.e. you! Accompanying this audit document, we include an Audit Action Items sheet containing all the relevant tabs mentioned throughout the audit. Within this sheet is normally a ‘Summary’ tab that Miklagárd recommends your development team use as a checklist for implementing our recommendations, that is, if Miklagárd is not assigned to handle the full-scale SEO audit and implementation of your website.
The Technical SEO Audit document provides insight and recommendations regarding the technical onsite web ‘elements’ and configuration of your website.
Miklagárd provides recommendations to improve the technical performance and subsequent visibility of your website in relation to search engine optimisation (SEO).
Miklagárd uses a range of leading SEO tools to research a wide variety of technical website ‘elements’. These elements include on-page as well as off-page factors that influence how a website is evaluated by a search engine and subsequently performs in terms of site visibility and ranking in search engine results pages (SERPs).
Each element is reviewed and then assigned an evaluation grade based upon its priority and degree of importance.
Miklagárd will, when necessary, provide links to reports from Google Analytics and Google Search Console, so it would be a good idea to log in to these in your default browser. If you have a personal or other professional Gmail (Google Apps) login, you may want to log out of it in your default browser and use a secondary browser.
Since the beginning of the Internet, search engines have sought to serve up web pages that searchers like and engage with, without immediately returning to the search results web page to try again. Taken in aggregate over millions of queries a day, search engines build up a good pool of data to judge the quality of their results. Google, in particular, uses data from Chrome, its toolbar, and Android devices to measure these engagement metrics. It is therefore essential that each web page on the site includes relevant, unique, and fresh content that is strategically optimized to rank for a site’s targeted keywords.
Common content problems and how to deal with them:
These web pages have less than ~450 words within the body content of the page. Google needs enough content per web page to determine the page’s content and meaning and to better rank the web page for relevant queries.
Step 1: Identify top web pages with thin content.
Step 2: “Thicken” these web pages to allow Google to better understand their meaning and context.
These web pages can be under-optimized for the keyword(s) you want them to rank for. One unintended consequence of this is keyword cannibalisation. This occurs when you have multiple web pages optimized for the same keyword, and the search engines can’t determine which web page they should send searchers to for that keyword.
Step 1: Identify potential query cannibalisation.
Step 2: Optimising these web pages by reviewing the title and description, adding more content and consolidating user intent will give the web page more value to both users and Google.
This will eliminate the cannibalisation issue and ensure users land on the web page that best serves their search query.
Search engines use the h1, h2 and h3 header tags to interpret what a web page is about, much the same way readers of a newspaper use article titles to get an idea of what an article is about. When web pages are missing header tags, it is harder for both visitors and search engines to decipher what the web page is about.
Step 1: Identify the most important heading within web pages with multiple h1s and make that heading the h1.
Step 2: The less important headings can be tagged as h2 or h3 headers.
The header tag should contain a judicious use of the keyword(s) you are targeting the web page for. Be careful to make the language natural and not keyword stuffed. If you have to decide between user experience and SEO, go with user experience.
Following the newspaper analogy from the section above about missing header tags: you wouldn’t have two titles on a newspaper or magazine article.
Step 1: Identify web pages missing h1 header tags.
Step 2: Add an h1 heading to all web pages missing h1 tags.
A web page should only contain one h1 tag – it tells the reader what your web page is about.
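For illustration (the headings and wording below are placeholders, not from the audited site), a sound heading structure on a single web page might look like this:
<h1>Organic Coffee Beans – Shop Online</h1>
<h2>Single-Origin Beans</h2>
<h3>Ethiopian Yirgacheffe</h3>
<h2>Coffee Blends</h2>
One h1 states the page’s main topic, while h2 and h3 headers break the content into sections and sub-sections.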
Search engines frequently crawl websites in order to determine which ones should be indexed in their massive databases. Search engine crawlers – also known as robots, bots, or spiders – collect web pages they deem high quality.
Over time, search engines will continue to request previously downloaded web pages to check for content updates. A certain amount of bandwidth is then allocated for the periodic reviews based on the web pages’ calculated relevancy. Every time a web page is downloaded, bandwidth is used, and once a website’s allocated bandwidth limit is reached, no more web pages will be crawled until the next review. This is referred to as a site’s crawl budget, and it works just like any other budget. But in the case of a search engine, when your site’s crawl budget has been exhausted, the bot moves on.
Since there is a limited amount of allotted bandwidth, it is crucial to direct the crawlers to the content you most want to be included in the search engine’s index of web content and eliminate any unnecessary or duplicate content, so as to avoid search engine crawlers dropping a crawl too early, a process known as crawl abandonment.
Google uses a variety of metrics, including PageRank, to determine a website’s most important web pages and then crawls them more frequently.
To use your website’s crawl budget effectively, it is important to investigate errors reported in Google’s Search Console (formerly Google Webmaster Tools) to make sure that those web pages render properly. If a web page can’t be fixed, due diligence must be applied to make sure the web page doesn’t negatively impact the rest of the site. This can be done several ways:
– 301 Redirect. Redirect these web pages to their new URLs. Use a 302 if the redirect is truly temporary (e.g., a product that’s out of stock).
– Serve a 4xx (e.g., 404, 410) status code, but remove these web pages from your XML sitemap(s).
– If 4xx web pages have received significant traffic or links, you need to make sure you 301 redirect these web pages.
Common crawl problems and how to deal with them:
Googlebot needs to spend time downloading (or crawling) a web page. The more time Googlebot spends crawling a page, the more “crawl budget” is used up. Dips in crawl rate are seen when the time spent crawling a web page has exceeded the average, which indicates an inefficient crawl.
Step 1: Identify pages with slow page load time.
Step 2: If the pages are not important, consider removing them to save crawl budget, or optimize the pages’ load time.
Excessively slow pages are using up crawl budget.
Tip! Use Google Analytics: Behaviour > Site Speed > Page Timings and select Avg. Page Load Times to compare page load to the website’s average.
Certain web pages on your website are receiving organic traffic, while others receive little or none. Since all indexed web pages need to be crawled, identify which ones receive, for example, only ten visits in the past month or a single visit in the past six months. Keeping pages indexed that do not generate web traffic has a direct impact on the crawl budget.
Step 1: Identify “dead-weight” web pages that don’t receive any traffic.
Step 2: Remove these pages from Google’s index to improve crawl efficiency for the web pages that do matter.
Search engines may remove important pages from their indices because of broken web pages. Important pages are, for example, those that have attracted quality links, garnered traffic, or driven conversions.
Step 1: Identify web pages receiving links and traffic that are serving 404 errors.
Step 2: Redirect these pages with the correct redirect code to their corresponding relevant page.
Note. Not all 404 pages are bad. Sometimes pages are removed, and there really aren’t good pages to redirect them to. Google understands that and will eventually remove pages returning a 404 from its indices.
Landing on a default 404 error web page and not knowing why the website is not returning what was expected, or how to proceed from the error page.
Step 1: Improve the 404 web page template for a better user experience. Consider:
– Add links to your most popular articles or posts, as well as a link to your website’s homepage.
– Have a site search box somewhere on your custom 404 page (sophisticated visitors tend to search; less advanced visitors tend to browse).
Tip. A good custom 404 web page should help keep visitors on your site and help them find the information they’re looking for.
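As a minimal sketch (the search endpoint and links below are placeholders to adapt to your own site), a custom 404 template could include:
<h1>Sorry, we couldn’t find that page</h1>
<form action="/search" method="get">
  <input type="search" name="q" placeholder="Search this site">
  <button type="submit">Search</button>
</form>
<ul>
  <li><a href="/">Back to the homepage</a></li>
  <li><a href="/blog/most-popular-post/">Our most popular post</a></li>
</ul>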
Cloaking is a server technique that some sites utilize to try to fool the search engines into awarding rankings a website doesn’t deserve. A website shows one version of a web page to users and a different version – many times overly optimized – to search engines. This is a stealth method that Google and the other search engines consider deceptive since it attempts to bias the spiders into ranking the web page for terms that should be out of reach. If detected, cloaking could cause your site to be penalised.
Step 1: Identify any cloaked URLs.
Step 2: Remove cloaked URLs from the website.
Category pages that are receiving traffic can create duplicate content for a site and provide minimal value in comparison to individual content pages.
Step 1: Expand content on these web pages (preferred to applying a noindex tag).
In cases where category pages are very similar, or are targeting the same keyphrase(s), a canonical HTML tag can be added pointing to the preferred page URL, instead of a noindex tag, so as to pass any link equity to the preferred page that you wish search engines to display in the SERPs.
Note. They can be made useful to both searchers and search engines by providing a paragraph or so of content pertinent to that particular category. By adding some unique content to these pages, you can keep these pages open to search engines. However, in many cases, if you don’t take that extra step, you should block them from search engines by adding a noindex follow tag to them or a canonical tag pointing to a similar, preferred page URL.
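For illustration (placeholder URL), a near-duplicate category page can point search engines to the preferred version with a canonical tag in its head section:
<link rel="canonical" href="https://www.example.com/category/coffee/" />
Alternatively, if the page should stay out of the index while still passing its internal links, a noindex, follow tag can be used instead:
<meta name="robots" content="noindex, follow">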
Non-existent pages in paginated series can be crawled and produce 404 errors in Google Search Console reports. Even blank web pages, although they serve a 200 HTTP response, have no content listed. The pagination HTML markup references links to these extra blank web pages, which will then be crawled by Google.
Step 1: Identify non-existent and blank pages in pagination.
Step 2: 301 redirect web pages that have been removed, or serve a 410 response code.
Step 3: Remove blank paginated pages (remove references to additional pages with no content listed in the series from the HTML markup and the rel="next"/"prev" HTML markup).
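As an example (placeholder URLs), page 2 of a paginated series should only reference neighbouring pages that actually exist and return content:
<link rel="prev" href="https://www.example.com/category/coffee/page/1/" />
<link rel="next" href="https://www.example.com/category/coffee/page/3/" />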
XML sitemaps are designed to help search engines discover and index your website’s most valuable pages. The sitemap consists of an XML file that lists all the URLs of the website that you want to compete in organic search. This XML file also contains ancillary information, such as when a web page has been updated, its update frequency, and relative importance.
Google recommends creating separate sitemaps for different types of content: images, videos, and text, for example. It is important to update the sitemaps each time new content is published, including any multi-format XML sitemaps, i.e. image and video XML sitemaps.
It’s important to update the sitemap only with the original versions of the URLs. They should not redirect or return errors. In other words, every web page in the sitemap should return a 200 status code. Anything other than a 200 status code is regarded by the search engines as “dirt” in a sitemap.
Finally, you should upload these sitemaps to Google Search Console (formerly Google Webmaster Tools). Google will tell you how many URLs have been submitted and how many of those have been indexed.
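A minimal XML sitemap entry (placeholder URL and values) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/category/coffee/</loc>
    <lastmod>2018-10-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>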
Common XML sitemap problems and how to deal with them:
Missing speciality sitemaps, such as video, image and Google News sitemaps.
Consider adding video, image and Google News sitemaps to your website. Adding speciality sitemaps will help Google discover your content more effectively.
Pages with noindex tags were found in the sitemap. Adding blocked pages is wasteful, because you only want web pages in your sitemaps that you want indexed by search engines.
Step 1: Identify the web pages you don’t need in the sitemap.
Step 2: Remove the web pages that don’t need to be in the sitemap.
The robots.txt file is an important text file placed in a web domain’s root directory. This file provides search engines with directives regarding the crawling of the site, from blocking the crawling of specific web pages or directories to blocking the entire site.
It’s not recommended to use robots.txt to prevent the indexing of website content that has quality inbound links pointing to it because, if the search engines can’t access a web page, they can’t credit you for those links. Similarly, you shouldn’t use robots.txt to block a web page that contains valuable links to other web pages on your site. If the bots can’t access the web page, they can’t follow links on that blocked web page, which means authority will not be passed to those linked web pages.
Even if you choose not to use the robots.txt file to prevent the crawling of any of the website’s pages, it’s still recommended to use the robots.txt to specify the location of the website’s sitemap.xml file. If the site uses multiple sitemaps, then the link should point to a sitemap index that includes links to all the website’s sitemaps.
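A minimal robots.txt (placeholder path and domain) that restricts crawling of a low-value directory and points to a sitemap index could look like this:
User-agent: *
Disallow: /internal-search/
Sitemap: https://www.example.com/sitemap_index.xml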
Common robots.txt problems and how to deal with them:
Disallowing specific CMS files and folders to be crawled.
Websites set up in a CMS, e.g. WordPress, should disallow the following directories in the robots.txt file:
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
A good website URL structure should at least:
– Use hyphens (-) to separate keywords in the URL.
To increase crawl efficiency and ensure that authority flows naturally from category web pages down through the rest of the site and back up to the category web pages, a few measures should be taken.
Common architecture problems and how to deal with them:
Broken external and internal links within your website.
Step 1: Identify internal and external broken links and where the content has been moved.
Step 2: Redirect the broken links to the web page where the content has been moved. If no replacement content can be found, contact the linked website or redirect the link to a relevant alternative.
Note. Broken external links provide a poor user experience and can also affect SEO performance by sending poor quality signals to Google.
Your website has source pages containing broken internal links.
Step 1: Identify broken links.
Step 2: Relink all internal links pointing to 404 pages to their new, updated, relevant pages and categories.
Web pages that have excessively many internal links.
Step 1: Identify web pages with too many internal and external links.
Step 2: Review pages flagged with too many internal links. If the link frequency feels spammy, consider reducing the number of internal/external links.
Note. Google doesn’t penalize for having more than 100 links on a page anymore, but it can still be a signal that the page is spammy.
Sometimes the most qualified page to rank for some keywords – particularly highly competitive head terms – is a category or tag page.
Consider internally linking to category pages and linking top blog pages to category pages where relevant.
Tip. One way to bolster your category pages is to make sure every URL in that category links to its corresponding category page. This can be accomplished with editorial links (i.e. links in the content itself), breadcrumbs, and/or links in template.
A redirect chain exists where there is more than one 301/302 redirect response before the page responds with a 200 OK. Redirect chains eat up crawl budget and, in excess, can slow down the website.
Step 1: Identify any redirect chains on your website.
Step 2: Resolve redirect chains for optimal crawl efficiency.
Note. Clean up web pages that have been identified as having redirect chains by ensuring that each URL in the chain redirects through a single 301 redirect to the final destination page URL.
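As a sketch (placeholder paths, Apache mod_alias syntax assumed), a chain such as /old-page → /newer-page → /final-page should be flattened so every legacy URL redirects straight to the destination:
Redirect 301 /old-page https://www.example.com/final-page
Redirect 301 /newer-page https://www.example.com/final-page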
Using schema with microdata or JSON-LD (JavaScript Object Notation for Linked Data) in a website’s HTML to declare specific elements allows search engines to better understand the meaning of its content and provide it with higher visibility in the search results through rich snippets. A rich snippet is essentially a bite-sized summary of the content that a user can expect to see on a page.
Snippets come in different forms: an image of a video screenshot, information about a concert, star ratings, cooking time for a recipe, etc. But all serve the purpose of providing a sneak peek of the content on your website.
These snippets are highly coveted because they improve click-through rates. They can also help reduce your website’s bounce rate because users will know whether they are interested in your website before they click.
In general, it’s recommended to use schema.org microdata (or JSON-LD) when specifying this type of content in HTML.
This content can be validated through the Structured Data Testing Tool inside Search Console. Google now also supports JSON-LD markup.
Common semantic markup problems and how to deal with them:
Specific content is worthy of having rich snippets; however, some of it is not being optimised with structured data to help influence rich snippets.
Step 1: Identify top web pages (category, product category, product pages, etc.) and map out the elements of each web page that could benefit from structured data.
Step 2: Add structured data to the selected top web pages to get rich snippets to show up in Google’s results.
Tip. Product pages are an excellent place to start because rich snippets such as review stars increase Click-through rate (CTR).
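A minimal sketch of JSON-LD for a product with review stars (all product data below is placeholder):
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Organic Coffee Beans 1kg",
  "image": "https://www.example.com/images/coffee-beans.jpg",
  "description": "Freshly roasted organic coffee beans.",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "87"
  }
}
</script>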
Images offer a great opportunity to earn visibility and additional organic traffic with image and universal search results.
The use of images on the website is also helpful to make web pages more attractive, illustrate an idea, and earn visitors’ attention.
Below are a few best practices in optimizing images for search:
– Write alt description tags using specifically relevant keywords for each image, without keyword stuffing them. These are primarily for the benefit of visitors with disabilities using screen readers.
Common image problems and how to deal with them:
Website load time is a ranking factor. One factor that can bloat page load times is large images. In the rush to publish, it’s easy to overlook image size.
Step 1: Identify which images are over 100KB in size.
Step 2: Optimize all/top pages where web page load time is an issue.
Note. Images can be reduced without affecting image quality.
Images on the website are missing their alt text description tag or have irrelevant alt text, meaning that the alt text does not accurately describe the image.
Step 1: Identify and review images with no or poor alt text.
Step 2: To each image, add alt text that describes the image, using keywords when possible, to help search engines better understand the web page’s content.
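For example (placeholder file name and wording), a descriptive alt attribute might read:
<img src="/images/ethiopian-coffee-beans.jpg" alt="Roasted Ethiopian Yirgacheffe coffee beans in a burlap sack">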
Videos offer a great opportunity to earn visibility and additional organic traffic with video or universal search results.
Below are some video best practices to keep in mind to maximize impact and reach in the search engines:
Common video problems and how to deal with them:
YouTube is a great platform for videos that are more viral in nature, as well as tutorials. However, videos with a commercial or promotional purpose tend not to do well on that platform.
Note. Going with a self-hosting platform can be more beneficial for sites creating their own videos and wishing to drive engagement to their own website. This is because you’re not competing against others for views. This should only be done with caution and careful measurement/testing to ensure higher page ranking.
Some videos on YouTube lack strong calls-to-action in the video to share it on social networks.
Note. YouTube makes it easy to add annotations that encourage viewers to like videos and subscribe to your channel. You should add annotations that prompt viewers to also comment on your videos as well as link to conversion pages. It’s important to vote up and engage with people who comment on your videos. Video responses show YouTube that your video is popular and relevant. Replying demonstrates you’re approachable.
There’s anecdotal evidence that YouTube favors videos that link to other YouTube assets within annotations.
These links can be added using annotations with links to your other videos.
Note. Use annotations to encourage viewers to take actions such as commenting, sharing the video and clicking through to e.g. your conversion pages.
Note. Depending on your current practice, you should consider creating shorter/longer videos, if it varies too much from average, to stand a better chance of ranking highly within YouTube. YouTube uses watch time as a ranking signal.
HTML Title tags (technically called title elements and web page titles) define the title of a document and are required for all (X)HTML documents. It is the single-most important on-page SEO element (behind overall content) and appears in three key places: the title bar of a browser, search results pages and social websites.
As a ranking factor in organic search and a highly visible element to users in the search engine result pages (SERPs), it’s important to make sure that a site’s titles meet certain criteria:
Common HTML title problems and how to deal with them:
Web pages with HTML titles over 65 characters. HTML titles should be kept under 65 characters (or 512 pixels) to be safe.
Ensure that HTML titles are not too long; titles over 70 characters will be truncated in search results, which may affect Click-through rate (CTR).
Note. If you don’t, your HTML titles could be truncated in the SERPs, which has been correlated with lower click through rates.
Web pages with HTML titles that are shorter than 30 characters. Excessively short HTML titles can fall short of adequately communicating what content a viewer can expect to see when she clicks on the search result.
Ensure that all HTML titles utilise the whole 50-65 characters for optimal HTML title presentation in search results.
Web pages with duplicate HTML titles. Pages with duplicate titles are oftentimes good indicators of duplicate content caused by pagination or URL parameters.
Step 1: Review all duplicate HTML titles.
Step 2: Remove all duplicated URLs with query parameters.
Writing titles that are both optimized and compelling is critical to both ranking and click throughs.
Step 1: Review all/top web page titles.
Step 2: Optimize web page titles to be descriptive of the content.
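As a sketch (placeholder brand and keyword), a descriptive title within the recommended length could be:
<title>Organic Coffee Beans – Buy Online | Example Shop</title>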
Meta descriptions are not a ranking factor, but they are highly visible to users in the SERPs and can affect click through rates. Google highlights not only search query terms but also synonyms in meta descriptions, making these keywords pop off the page for searchers.
You should keep a few things in mind when writing meta descriptions:
Common meta description problems and how to deal with them:
Missing meta descriptions. Any page that is intended to be public facing should have a meta description.
Step 1: Identify and review meta descriptions that are missing.
Step 2: Add the missing meta descriptions.
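For illustration (placeholder copy, roughly 120 characters), a descriptive meta description looks like this:
<meta name="description" content="Shop freshly roasted organic coffee beans with free delivery in Denmark. Order before 15.00 for same-day dispatch.">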
Long descriptions will be truncated, reducing their readability and click-through rates. Descriptions that are too short can also have a negative impact on website traffic.
Step 1: Identify and review meta descriptions that are either too long or too short.
Step 2: Descriptions that are too long or too short should be amended and rewritten with care, using a descriptive and compelling approach.
The page description is not compelling or is written in a passive voice that doesn’t encourage users to click.
Step 1: Review top pages on your website, such as category and product pages.
Step 2: Amend and write more compelling meta descriptions to encourage organic CTR.
Web page load speed is part of the Google search ranking algorithm. For usability reasons, best practices dictate that a web page should load within 1-2 seconds on a typical connection. However, according to Search Console data, a load time of 1.4 seconds is the threshold between a fast web page and a slow web page. That means, ideally, that every web page on your website should load in 1.4 seconds or less, to receive the maximum SEO benefit for fast-loading web pages.
Besides the ranking benefits, there are crawling benefits for faster sites. As discussed in the crawl section above, each site is assigned a crawl budget. When search spiders have exhausted that budget, they move on. There is a strong correlation between improving web page speed and the search engines’ ability to crawl more web pages.
Google gathers web page load time data through actual user experience data collected with the Google Toolbar, Chrome, and Android and may also be combining that data with data collected as Google crawls a website. As such, web page load speed in terms of the ranking algorithm is being measured using the total load time for a web page, exactly as a user would experience it.
Common site speed problems and how to deal with them:
Top landing web pages have high page load times.
Optimize the page load time for your website’s top 25 landing web pages.
Competitors’ website load times are faster than your website’s, which affects your ranking on Google.
To be competitive and rank higher, page load time (especially for mobile) needs to be kept low, for example by keeping the number of HTTP requests per page under 20.
Slow server response, especially Time to First Byte.
Review server response time and aim for a time to first byte of 500ms or less.
Website assets are transferred over the H2 (HTTP/2.0) transfer protocol, apart from other assets that are sourced from IIS, Apache and nginx servers using the HTTP/1.1 protocol to transfer data. The HTTP/1.1 protocol can be prone to delays due to sequenced fetching of page resources.
Use the most up-to-date transfer protocol to ensure fast site speed.
Note. Work with external providers to see if these organisations can transfer assets to their client’s sites more efficiently by upgrading their transfer protocols.
There are assets that are not being cached. These assets must be downloaded again upon each page request by both bots and users, causing a prolonged page load.
Step 1: Review HTML caching rules.
Step 2: Improve caching rules for HTML for optimal user-experience.
Note. The performance of websites and applications can be significantly improved by re-using previously fetched resources through HTTP caching.
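As an illustration (example values that should be tuned per asset type), long-lived static assets such as images, CSS and JavaScript can be served with a long cache lifetime, while HTML pages get a shorter one:
Cache-Control: public, max-age=31536000   (static assets)
Cache-Control: public, max-age=600   (HTML pages)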
Search engines require that your website is mobile ready. There are a few key facts marketers and site owners should know about mobile-friendly websites.
Completely apart from Google’s Mobile Friendly Update, sites should strive to serve up pages that are friendly to mobile users. If mobile visitors have a difficult time digesting your content, they will oftentimes grow frustrated by the poor user experience and leave your site.
This will not only result in a reduction of traffic (as search engines could send less mobile traffic to your site) but also fewer social shares. All of these consequences have the potential of hitting an organization’s bottom line. Therefore, it is in the best interest of most site owners to bring their sites into compliance.
Common mobile website problems and how to deal with them:
Mobile visitors are becoming increasingly intolerant of slow websites. Lag time can cost a website in lost revenue and trust.
Consider AMP for the following types of content: News articles, Blog posts, Recipes, Product listings, Travel guides and the like.
Consider implementing an AMP strategy for the homepage, category pages, product listing pages and blog posts.
Note. Although we typically think of AMP for publisher sites, it can also be successfully implemented on ecommerce sites with some caveats.
Caution. AMP is not without implementation issues.
Google differentiates between multilingual and multi-regional sites. A multilingual website is a website that offers content in more than one language, whereas a multi-regional website is one that explicitly targets users in different countries. Some sites are both multi-regional and multilingual (for example, a site with different versions for the US and Canada, and both French and English versions of the Canadian content).
Expanding a website to cover multiple countries and/or languages can be fraught with challenges. Because you have multiple versions of your site, any issues will be magnified.
Common internationalization problems and how to deal with them:
If your website is multilingual, search engines use the content language of your page to determine its language, not lang attributes. Your website has mixed translation issues or has only translated the boilerplate text of your web pages, while keeping your content in a single language. This can create a bad user experience if the same content appears multiple times in search results with various boilerplate languages.
Ensure that each multilingual version of your website is translated fully to its specified language.
Web pages are missing hreflang return links.
Step 1: Identify and review the hreflang tag errors report.
Step 2: Ensure all returned hreflang links match the canonical link of the page.
Note. hreflang is fragile at the best of times; keeping an eye on the hreflang tags-with-errors report on a daily basis is recommended.
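A minimal sketch (placeholder URLs and language/region codes) of reciprocal hreflang annotations, which every listed language version must carry in its head section so the return links match:
<link rel="alternate" hreflang="da-DK" href="https://www.example.com/da/" />
<link rel="alternate" hreflang="en-GB" href="https://www.example.com/en/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />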
— Technical SEO Audit document 2018, Miklagárd SEO Team
Miklagárd can support your website with the implementation of audits; we offer in-house training, consulting and hand-holding where necessary.
Our phone support is open Monday to Friday from 9.00-16.00