A Complete Guide to a Technical SEO Audit

By the end of this tutorial, you will understand the technical side of a website and how to optimize it so that the site works better and ranks higher in search engines

We will cover these points in particular:

  1. What technical optimization is, and why you need it
  2. A tool to check the technical state of a website
  3. Working with AMP pages
  4. Key elements of a tech audit: what to pay attention to
  5. Pitfalls
  6. Widespread questions during an SEO tech audit (FAQs)

What is Technical Website Optimization?


Technical website optimization is the process of finding and fixing website issues (slow page speed, code bugs, indexing problems, service files like sitemap.xml and robots.txt, and so on) to improve so-called "website health"

The goal is to keep the website fast, convenient, and useful for users, and attractive to search engines like Google, which leads to better rankings.

For example, say a page doesn't load fast enough for users

Let's say people wait 5 seconds just to see the first frames of the page

That is definitely a problem

People will leave your website and seek answers from competitors instead

Another example of a technical problem: a single line of code can literally block your website from being indexed in Google, so no matter what you do, it won't get any further in acquiring search traffic.

This is why SEO specialists insist on doing the technical part first, and only then moving on to other parts of the huge SEO field.

And of course, there are more problems and pitfalls than the ones I've just mentioned. In this guide, I will show you the most widespread problems and what to do about them, with real examples included.

A Tool to Check Out the Technical State of a Website

We know there could be a bunch of technical problems with a website

But where to look first?

This is where the Screaming Frog tool comes in handy!


It is one of the most powerful tools I have used in my entire career

This tool has helped me a lot throughout my work. Basically, you can check whatever you want to check on a website in terms of technical stability and more

All you need to do is download it (by the way, the first 500 pages are free to analyze) and enter your website address in the crawl bar

So if you have a small website, it can stay a free tool forever

When it's done, you need to know where to look, right? What exactly should you check first, and how should you prioritize, to improve your website's health?

I have prioritized it for you: the things mentioned below will be more than enough to get what you want

Client Error (4xx) Problems

Probably the most common issue: the report shows pages that don't work (status code 404, page not found)

So people can just leave a website because of that error.

Simply imagine that you are looking for a book on a shelf, and it's not there; you will probably look for that book somewhere else

The same goes for 404 pages: your broken page isn't what people are looking for, so they may leave and find what they need on a competitor's website. Eventually, you lose clients

What to do

Step #1

Find all 404 pages inside the Screaming Frog panel

You can do that by exporting these links

Simply go to the 404 reports from the right sidebar

Then select all the links from this tab above

And click the "Export" button down below

Now you have the entire list of broken links with their sources

Step #2

Repair these pages and make them work. Since you know where they are coming from, you can fix them easily

If you have no clue what to do with them, redirect them to the home page

It's better to redirect them than to leave them as 404 pages
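On an Apache server, for example, a simple 301 redirect can be added to the .htaccess file. A minimal sketch (the path and domain are placeholders):

Redirect 301 /old-broken-page/ https://yourwebsite.com/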

All set. Now that you have dealt with the 404 pages, it is time to move forward

Tip

Don't forget to erase these broken links from your sitemap.xml file, so search engines don't keep trying to crawl them
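Each page in sitemap.xml sits in its own <url> entry, so you just delete the entries that point to removed pages. A hypothetical example of an entry to remove:

<url>
  <loc>https://yourwebsite.com/old-broken-page/</loc>
</url>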

The second common issue is in the robots.txt file

robots.txt

The robots.txt file is a set of rules that controls crawling access for bots like Googlebot; it is mostly used to keep bots from overloading the website with requests.

By setting the wrong access permissions inside it, you can screw everything up by blocking pages that shouldn't be blocked.

Let’s say you have a category where you sell electric kettles

But somehow it turns out that the entire category has been blocked

Basically, you will see something like this inside your robots.txt file

Disallow: /electric-kettles/

Where /electric-kettles/ is the category page where you sell electric kettles

To prevent this from happening, it’s better to check the robots.txt file

Simply go to the "Blocked by robots.txt" report in Screaming Frog

Here, you will see the entire list of pages that have been blocked from crawling

Now you know where to look for a problem

Widespread easy-to-catch problems

1. The entire website is blocked: robots.txt contains the Disallow: / rule
2. A certain page is blocked: for example, if you sell scooters and have a https://yourwebsite.com/scooters/ page, and you see the rule Disallow: /scooters inside your robots.txt, you probably need to delete it, because it blocks that page from being crawled (see the healthy example below).
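For contrast, a healthy robots.txt usually blocks only service pages and points bots to the sitemap. A minimal sketch (the paths and sitemap URL here are placeholder assumptions, not universal rules):

User-agent: *
Disallow: /wp-admin/
Disallow: /cart/

Sitemap: https://yourwebsite.com/sitemap.xml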

Tip

By the way, robots.txt rules are not a strict directive that blocks indexation in Google.

Mostly, we use robots.txt to avoid overloading the website with bot requests and to preserve the crawl budget

To keep a website or a page out of the index, it is better to use the noindex tag or to protect it with a password (or a private page, as it's done in WordPress)

Anyway, practice shows that blocking pages inside the robots.txt file can also end up keeping those pages out of Google

Two or More Variations of a Website (Protocols)

For example, both http:// and https:// versions of pages; you can spot them inside the Screaming Frog "Security" tab

What does it mean? 

Basically, when you have both http:// and https:// protocols on your website

You need to leave only one version, and it should be https://, because after Google's recent updates, websites served over http:// get flagged as not secure, and many people simply won't enter them

You can add a rule to the .htaccess file that redirects every HTTP page to its HTTPS version:

RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301,NE]

This rule automatically 301-redirects every page that is requested over HTTP to its HTTPS version, leaving only one live version.

So you don’t have to re-check this error over and over again   

Meta-tag Duplication

The meta-duplication includes:

  1. Title
  2. Description
  3. And H1 tag

You can find these after crawling your website, in the Screaming Frog right-hand panel

What do I do if I find one of these duplicates?

All you need to do is change the page data (title, description, and H1) so that each page is unique.

It is better to keep these fields unique across pages. The title, description, and H1 tag should be optimized; if not, Google may remove one of your pages, considering it a duplicate.

You can do this type of work manually

But what if you have more than a hundred pages?

If you are using WordPress, it comes to the rescue with a very useful plugin

The Yoast SEO plugin helps you set up meta-tags through templates applied to every page; instead of editing each page manually, you just define the values for each meta-tag, and that's it.
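For example, a typical Yoast title template built from its snippet variables could look like this (one possible setup, not the only one):

%%title%% %%page%% %%sep%% %%sitename%%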


If you are new to that, please check out this guideline about Yoast configuration

But wait a second, there is a pitfall here

For example, let's say you have an online shop with product listings spread across numbered pages: page 1, page 2, and so on, up to page № whatever

You don't have to rewrite their titles and descriptions as Shoe page 1, Shoe page 2, Shoe page 3, and so on

Instead, you can use the canonical tag to tell Google that those pages aren't duplicates.

Canonical Tags

A canonical tag, in a nutshell, is a hint for Google that points to and defines the primary page among pages with the same content, most often paginated (numbered) pages

You can check canonical availability under the "Contains canonical" header inside Screaming Frog

For example, if you have a shoe store, you sell shoes.

Let's say there are 25 pages of similar content, each listing different shoe models: /shoes/page-1/, /shoes/page-2/, /shoes/page-3/, and so on. Having the same title and description on them is okay.

You don't need to write nearly identical title and description meta-tags for each of those pages

Instead, you give Google a reference saying that you have one primary page, /shoes/, and extra (paginated) pages like /shoes/page-1, /shoes/page-2, /shoes/page-3, etc.

So these numbered pages should contain the following tag inside the <head> section

<link rel="canonical" href="https://test.com/shoes/" />

Where href should contain a link to the primary page of the given category, in our case /shoes/
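It is also a common convention (though not a strict requirement) for the primary page to carry a self-referencing canonical:

<link rel="canonical" href="https://test.com/shoes/" />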

Put simply:
You tell Google that you have numbered pages with pretty much the same content on them, and you flag this explicitly to avoid content duplication problems

When you complete this, Google will treat those paginated (numbered) pages as an addition to the parent category page (in our case, /shoes/)

The next common issue type would be

Noindex Tags

The noindex tag prevents a page from being indexed in Google, basically just blocking it from search results.

You can find this report under the "Directives" header inside Screaming Frog

So if you find a page in the Noindex report that actually should be indexed, open that page and delete the noindex tag from it, and that's it.
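The tag itself sits in the page's <head> section and looks like this:

<meta name="robots" content="noindex">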

You can read about Noindex over here in Google’s official documentation  

Tip

SEO specialists often want to keep paginated (numbered) pages out of the index, but instead of handling them properly via the canonical tag, they just put those pages under a noindex rule, which basically won't let Google pick up the additional helpful content on those pages.

It is better to noindex only those pages that definitely should be kept out of the index

And the last, but not least important, thing you can check in Screaming Frog during a technical audit is the hreflang tag

Hreflang Tag

Mostly, the hreflang tag is used to control which version of your content is served when your website is international and has versions in other languages or for other regions

The hreflang tag tells Google that a website has the same content in different languages or for different regions that matter to people

To keep Google from treating those versions as duplicates (by their meta-tags, content, and structure), you need to set up this tag properly for each version of a page

You can go to the official Google hreflang documentation and read about examples and configuration methods

While you're reading it, I can say that the hreflang tag is one of the most powerful marketing levers in the technical optimization of a website

Because you can point out to Google which page is appropriate to show to people from different regions, with different languages, or even different currencies

Google tries to handle this automatically, but it can do so imprecisely, so you need to tell it about your pages explicitly

For example, you have two pages in English about a shipping service: from the US to the UK, and vice versa

Same language, but the content differs a bit: the shipping fees are in different currencies, US dollars versus UK pounds, and the measurement systems differ too.

You can serve both pages by telling Google which one is best for users from the US and which one for users from the UK

Instead of hoping that Google will choose the right page, you take control and steer it the way you want

What to do
Simply place this line of code

<link rel="alternate" hreflang="lang_code" href="url_of_page" />

Inside the <head> section of a given page

For example, you tell Google that one of the UK-based pages https://yourwebsite.com/shipping-to-us contains information about shipping from the UK to the US 

Use the tag below to let Google show content to people from the UK with the proper currency, delivery time, and measurement units (note that hreflang codes use a hyphen, not an underscore)

<link rel="alternate" hreflang="en-GB" href=".../shipping-to-us" />

And use this hreflang tag

<link rel="alternate" hreflang="en-US" href=".../shipping-to-uk" />

To show people from the US the same kind of content but with their data, like currency and measurement units
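Note that hreflang annotations must be reciprocal: each page should list all language/region versions, including itself. So both pages would carry the same pair of tags in their <head> sections:

<link rel="alternate" hreflang="en-GB" href="https://yourwebsite.com/shipping-to-us" />
<link rel="alternate" hreflang="en-US" href="https://yourwebsite.com/shipping-to-uk" />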

When you’re done, re-check again and make sure you haven’t mixed it up 

What's Left Out of the Picture: More Screaming Frog Features

Crawl Analysis Tool

The crawl analysis tool can help you to collect the data about:

  • AMP pages
  • Search console problems 
  • Sitemaps problems
  • Content problems 
  • And much more…

You can use it right inside the Screaming Frog tool

You can work with this information directly or export it to create convenient reports for a client, and much more.

All you need to do is to fully crawl a website first

Then configure the crawl analysis tool

Then click analyze again; in the same sidebar on the right, you will find the analyzed problems sorted into the appropriate categories

Sitemap Export Feature

This tool is located in the main menu of Screaming Frog

It is a very useful one

It creates a new sitemap, with all the configuration you want, in one click.

All you need to do is crawl the website, then click apply, and a ready-to-go sitemap is in your hands, ready to be used on the website

API Access

This tool can be useful if you want to run a bulk analysis of your website against the following software tools:

  • Google Analytics 
  • Search Console tool 
  • Page speed Insights (google page speed) 
  • Majestic
  • Ahrefs
  • Moz 

If you are using any of those tools, connecting them can save you a lot of time during the audit, because their error lists appear right on the screen for every page you want to check

So you don't have to go to their web versions and check the same pages one at a time in each tool

Instead, you connect via the APIs and see the same information immediately, right inside the Screaming Frog panels

Then you can also export it, and use it further for presentations or as a report for your client.

So for now

Those were the most common and most important problems you can face during technical website optimization.

Apart from, of course, cases when a website is completely broken and not working at all

And surely there is more to see, particularly inside the Screaming Frog tool,

But we are going to step back from it a little and keep following the goal: make the technical optimization as good as possible

Spending precious time on the most important things brings the most performance; don't kill time on tasks just because they could be done perfectly.

But for that, we need to consider other tools, like Google Search Console (GSC), the AMP validator tool, and structured data testing tools

We’ll start with the first powerful and useful technology called AMP 

Working With AMP Pages (skip this step if no AMP pages)

AMP (Accelerated Mobile Pages) is a technology that allows a page to load almost instantly, in much less time than its normal version.

Reminder: make sure you have set up the Google Search Console (GSC) tool for your website before working with this section.

If not, you can use my recent GSC tutorial to install it manually on a website, or if you are using WordPress CMS, you can use the Google Search Console plugin to install it automatically.

When it's done, you have to wait at least 24 hours to let Google populate the tool with data

Your particular project may not contain AMP pages, but imagine it does.

How do you find out whether there are any AMP problems?

Simply go to Google Search Console; if there are AMP pages, you will see a section called "AMP"

When you click it, you will find the problems (if you have any); here I want to walk you through the ones I struggled with myself, because you probably will too

AMP Validation Errors

Before checking for AMP page problems, we need to know the tools. I have been using these two to see exactly what problems an AMP page has

AMP Validator Tool

This is a tool that identifies problems on AMP pages; basically, you add the problematic URL, and the program shows exactly what to fix

AMP Validator Extension

The same tool, but packaged as a browser extension

Go to your AMP page and hover the mouse over the extension icon: if the page doesn't contain any problems, you will see a green icon with the text "Valid AMP"; if not, a red icon with the problems listed

Also, one thing to know before we go:
There are 2 error types: critical and non-critical

In the first case, the AMP page can't be indexed and ranked in Google; the second type can still be indexed, but is prevented from being enhanced in search results.

So pay attention to critical errors first, and then move on to non-critical ones.

Now that we know about the AMP tools, it is time to talk about common AMP validation errors and how to handle them

In this tutorial, I will show you the logic and common AMP problem patterns rather than the entire error list, because I haven't faced most of those errors myself, so I can't recommend anything about them at this point

First things first

URL not found (404)


URL not found is the most common AMP problem. It basically means that an AMP page was not found on the website, either because the page itself was removed or because the <link rel="amphtml"> tag doesn't contain a valid href attribute pointing to the AMP page

What to do:

1. On the non-AMP page, set a proper link to the AMP version

<link rel="amphtml" href="https://your-website.com/proper-amp-page">

2. And on the AMP page, set a canonical link back to the non-AMP version

<link rel="canonical" href="https://your-website.com/proper-non-amp-page">

For example, let's say you have both versions (AMP and non-AMP) of a page where you sell scooters

  1. The non-AMP version's URL would be /scooters-buy/
  2. The AMP version's URL would be /scooters-buy/amp/

The following tag goes to the non-AMP version page

<link rel="amphtml" href="https://your-website.com/scooters-buy/amp">

The other one goes to the AMP version page 

<link rel="canonical" href="https://your-website.com/scooters-buy">

This is how you link the two pages to each other

And now Google can crawl both versions and show them separately as an AMP and non-AMP version

And Bob’s your uncle 🙂

Tip

If you have only one page and it is the AMP version, add a canonical link pointing to itself

See more about it 

Malformed URL found for attribute ‘href’ in tag ‘a’ errors


Basically, it means that something went wrong with the characters inside an <a href> tag on the page.

Most errors you will face contain things like this, typos or incorrect characters; it is not a big deal.

What to do

Go to the AMP validator tool (https://validator.ampproject.org/), put the problem page inside the search bar, and the validator will show you exactly what you missed on the given page

For example, I have found this page https://cointelegraph.com/news/major-grayscale-digital-currency-funds-are-trading-at-34-to-69-discount-to-nav/amp with the given problem

I put this problem link inside the AMP validator tool 

And I see the following code error

'http://designed%20to%20prevent%20fraudulent%20and%20manipulative%20acts%20and%20practices.' for attribute 'href' in tag 'a'.

Now I know where the problem is hiding from me

And I can change the URL inside the href attribute where the tool points to the problem

I put in a proper URL (in this case, the exact article the site wanted to link to)

And that’s it, the error is gone

Now the AMP page can participate in ranking again.

Simple

Missing URL in HTML tag “amp-img” errors


This problem appears when an amp-img tag's src attribute doesn't contain the image URL.

Basically, the amp-img tag is the AMP replacement for the HTML5 img tag on AMP pages

What to do 

Go to the same AMP validator tool, and it will show you which image is missing the src attribute

Add the image URL to that src attribute
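A fixed tag might look like this (a sketch; the image path, dimensions, and alt text are placeholders):

<amp-img src="/images/scooter.jpg" width="600" height="400" layout="responsive" alt="Electric scooter"></amp-img>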

Now you can request re-indexing of the page inside Google Search Console, wait until Google refreshes it, and that is it.

If you want to see more about AMP HTML specifications, go here – https://amp.dev/documentation/guides-and-tutorials/websites/learn/spec/amphtml?redirected 

There are more errors you may face, and they follow pretty much the same patterns

My mission here is to show you the behavior pattern during AMP configuration, so now you know what tools to use and where to look

If you have faced an AMP problem that is not on this list, please take a look at this AMP official source

Google Search Console (GSC) Tool


A free tool by Google that contains a set of handy reports, helping to identify a website's strengths along with its weaknesses:

Reminder: make sure you have set up the Google Search Console (GSC) tool for your website before working with it.

If not, you can use this official tutorial to install it manually on the website, or if you are using WordPress CMS, you can use the Google Search Console plugin to install it automatically.

When it's done, you have to wait at least 24 hours to let Google populate the tool with data

Indexing Tool

Allows you to check page availability.

For example, you can see how many pages are not in the index, and how many are in the index right now.

It shows what problems they're facing and what to do to fix them.

This tool also shows video and sitemap indexation problems.

Moreover, you can delete any page you want from Google Index manually right inside the tool.

So if you are working with videos, it can be very handy

For example, I want to check how many pages are not in the Google index, and what I can do to get them in

Then I can see exactly which pages are broken right now.

Experience Tool 

Helps to evaluate page experience during user sessions on a website

For example, you can find out how many pages offer a good experience and how many don't, for both mobile and desktop devices.

This tool also provides the Core Web Vitals report, which shows how your pages perform based on real-world usage data (sometimes called field data), and how to improve those signals to make pages load faster and feel more convenient for people

Moreover, there is a mobile usability checker that highlights both good page experiences and unusable pages that hurt the user experience

So you can definitely work with this tool and fix pages so that they work well for people

For example, I want to check how many pages on my website are pleasant for people and how many aren't

Then I can click "Core Web Vitals" and see exactly what I can do to better optimize these pages

Or even see how many pages on my website are not usable on mobile devices, and their exact issues

Enhancements Tool

This tool allows adjusting complementary elements of the website that directly affect its performance in featured snippets and rich results. (A featured snippet is excerpted content taken from a website and placed at the top of the search results to provide a quick answer without visiting the source; it is also called position #0, and only Google's algorithms decide which website and what content to feature.) You can read about featured snippets and ways of optimizing for them in my Featured Snippet Guideline

For example, you can check your structured data (a standardized format for providing information about a page and classifying its content; on a recipe page, for instance: the ingredients, the cooking time and temperature, the calories, and so on), particularly "Breadcrumbs," right inside the tool. If there is an error, you will see the list of pages containing the problem and what exactly to do about it

Links Tool

Allows you to see all internal links on your website and external links pointing to your website from other sources, for free. It also shows the top linking text, the "anchor" (anchor text is the visible, clickable text in a hyperlink), that those external links use

This tool is a very useful one

For example, if you want to check out what websites are linking to you and what anchors they use

So you can track and optimize these links further, building up a link strategy

Or you can use it to find bad and spammy links and quickly neutralize them using the Google Disavow Tool

Those tools above are extremely helpful during website audits, but that’s not everything that Google Search Console has.

Now I want to share the key elements of technical website optimization, since no tool can actually tell you what is truly required and what is not.

Key Elements During a Tech Audit: What to Pay Attention To?

During a website audit, it's indeed important to use the most powerful, up-to-date tools, stick to the guidelines, and check every single "mistake" that can happen

But the key elements aren't about that.

The key element of a technical audit is to fix the most important issues (the ones that actually influence the website) first, and fix the rest during the ongoing work.

That way the website just works fine for people in most cases and brings in new customers, with a low ratio of effort to time spent

You can literally define these key elements, knowing nothing about SEO

But it takes time to understand it during your journey

So I want to reinforce what we've learned today with the key elements that have helped me throughout my SEO career

Key Element #1 – Turn on the user vision


Go through your website yourself and try to notice every single problem you encounter along the way (a.k.a. QA engineering)

For example, you try to open the menu on a mobile device, and suddenly you can't close the pop-up menu, because there is no option for that

You will be surprised, but speaking for myself, I have faced dozens of problems like this one.

People just couldn't use a website's menu because it wouldn't work, and they left.

Sounds silly, but most tools can't catch issues like that and notify you

Meanwhile, things get worse for the people using your website, and in Google's eyes that's a bad signal.

So first, try to be a user yourself and go from start to finish; you will probably find a lot of interesting moments that your users might be struggling with.

Or hire a QA department, in case you are a billionaire 🙂

Key Element #2 – Do not play with Google Page Speed or things like that too much

I often hear, and unfortunately keep seeing, that people in the SEO industry are incredibly focused on things like Google PageSpeed scores.

Well, let's clarify: tools like that are not the only thing that matters in SEO, they're not even the main point, and focusing only on a super-fast page will bring you almost nothing if your content is weak.

It is better to spend a bit of time on the technical part, optimizing it to a solid "okay" and covering the main problems, and spend the rest of that time creating useful and convenient content for people

As proof, you can actually take websites that rank high in Google, say in the top 1-3, and scan them with the PageSpeed tool

You will definitely find websites with low page speed scores but high ranks, because their content helps people solve their problems

And I think that is okay: if you work hard enough in many fields (link building, content creation, E-A-T factors, and then technical stability), it is worth it and will eventually give you good ranks in Google.

I think you get the point: consistent, comprehensive work centered on users and on how you can improve their experience is the only thing that keeps working well no matter what Google update you face

Key Element #3 – Don't listen too much to what others say, including Google

The one thing Google has actually pursued throughout its lifetime, and which I completely understand, is getting people's problems solved through the content expertise and experience you give them

So during any SEO-related work, including a tech audit, ask yourself: does this solve any user problem? And if yes, how exactly? Try to find proof of that; do the research yourself.

For example 

You know that your page speed is really bad, and people just leave your website because they can't wait 10 seconds to see the content

You understand that getting into the green zone matters, but improving your load time from 1.5 seconds to 1.2 seconds is not necessary, because it doesn't make a real difference for people.

Furthermore, you now know that the main reason your website takes 10 seconds to load is that the banner on every page is way too heavy: instead of 350 KB, it's a 10 MB high-quality picture, and the PageSpeed tool is shouting about it with scary bold red alerts

So you see the logical correlation here:

Optimize the banner by reducing its size – the page loads faster – more people see the page without leaving before it appears – lower bounce rate – more visitors – more conversions

All the main metrics blow up just because you fixed one banner, compared to the moment when that banner couldn't even load to let people see your content.
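As an illustration, a compressed, responsive banner could replace the single heavy image. A sketch (file names, sizes, and alt text are placeholders):

<img src="/images/banner-1280.webp" srcset="/images/banner-640.webp 640w, /images/banner-1280.webp 1280w" sizes="100vw" width="1280" height="400" alt="Promo banner" loading="lazy">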

The key point here is to prioritize tasks based on real problems

TL;DR: Step-by-Step Website Technical Optimization, Packed in an Easy-to-Catch Way

I have gathered the recently discussed things all in one place, in an easy-to-catch way, without the examples

You can start with the first one and then move step-by-step to the last one, making sure you are not missing anything

  1. Download Screaming Frog, put the website link inside the crawl bar, and press start
  2. When it's done, make sure to click "Crawl Analysis", choose all options for clarity, then start the analysis.
  3. When it's done, check these main points as a high priority

    3.1. Make sure your robots.txt isn’t blocking pages that should be in Google

    3.2. Make sure pages that should be indexed are not blocked by the noindex tag

    3.3. Make sure there are no 404 errors, if so, handle them first

    3.4. Make sure your website works with only 1 protocol, https://; if not, configure 301 redirects to the https:// protocol

    3.5. Make sure there are no duplicate meta-tags such as title and description; if there are, change them to proper ones and re-check again.

    3.6. If you have paginated (numbered) pages, make sure to set up proper canonical tags pointing to their parent pages

    3.7. If you work with multi-regional or multilingual websites, make sure you set proper hreflang tags
  4. When you have finished with Screaming Frog, you can use the Google Search Console tool to find the rest of the website's problems; in particular, pay attention to:

    4.1. Make sure to have as many pages in the index as possible; go to the Indexing report and find out what exactly to do with non-indexed pages

    4.2. Use the Experience tool to identify pages with Core Web Vitals problems; if you find any, follow the instructions Google gives right where you see the problem

    4.3. Use the Enhancements tool to identify problems with complementary elements, such as structured data, particularly breadcrumbs, video snippets, etc.

    4.4. Check the Links tool to make sure you don't have spammy or bad links from websites you don't trust; if you find any, use the Disavow tool to neutralize those links
  5. Working with AMP pages (only for websites with AMP technology)

    5.1. Open up Google Search Console Tool and find the AMP report inside it

    5.2. Find all the problem pages, and make sure to fix them from high priority (critical errors) to low priority (non-critical ones)

    5.3. Use the AMP validator tool to check every problem page you found in the previous step, so you can see exactly what to do with it. Most errors appear because tags contain wrong URLs as links, or there are spelling mistakes, so they are easy to fix; either way, it's better to go through the whole AMP section.

Pitfalls

During a technical audit, you can stumble upon many problems that at first glance aren't easy to solve, and they will definitely get in the way and eat up a lot of time

I've written up one of the pitfalls I faced back then; it will save you time if you run into the same problem

Screaming Frog isn’t working properly (crawling incorrectly)

Basically, after you crawl the website, the Screaming Frog report seems to lie a bit: it shows 404 for pages that actually return 200 and are fine, or even shows 500 errors where the page actually works well

What does it mean?

Basically, it means that the website has some sort of protection on the server side, such as Cloudflare, that won't let you fetch the website or crawl it properly

What to do

It's not a good idea to simply remove the protection service from your website; that would hurt the website's security

So if you find this sort of protection, add your crawler's IP address to the whitelist of that program

For example, Cloudflare has these settings inside its Firewall app category; if you add a particular address to the whitelist, it will allow the website to be crawled from that specific IP address

And eventually, the information you get using Screaming Frog will be correct.

Notes: 

If you are using Ahrefs as a crawler, add these IP addresses – https://help.ahrefs.com/en/articles/78658-what-is-the-list-of-your-ip-ranges into the whitelist section of the app

If you are using SEMrush as a crawler, add these IP addresses – https://www.semrush.com/kb/950-troubleshooting-content-analyzer#whitelist-the-bots into the whitelist section of the app

If you are using Screaming Frog, you'd better buy a proxy server, set it up under Configuration – System – Proxy in Screaming Frog, then restart the program to apply the changes

Basically, while your own IP may be dynamic and change over time, the proxy is static, so you don't have to keep going back and adding new IP addresses to the whitelists

FAQs

How Much Should A Technical SEO Audit Cost?

Basically, a technical audit is part of the SEO work itself and usually isn't sold separately from the rest of the work. Still, the cost can range from $100 to $1000, depending on the agency you work with, the SEO specialist's qualifications, and the project size

Why Should You Ask For A Technical SEO Audit?

Because a website's technical state is one of the most important things to get right as a foundation; imagine a house without a stable foundation. It's the same in SEO

How To Present Audit Findings To a Non-Technical Audience?

You can prepare the document for both audiences, technical specialists and non-technical readers, by putting it in a nutshell for the client first, and then for the tech people.
For example, you can say "insert this canonical tag into the category pages, using the canonical rel link"
Or you can say "a canonical tag helps identify the main page of a given category, which helps Google understand what to crawl and how. That leads to more organic traffic from Google, which is why we need to optimize it; put this code right here…"

Just rephrase the technical terms into human language, highlighting the value

Does Technical SEO Work?

Yes, it does. Along with other improvements, it eventually gives you better website performance.

How To Learn Technical SEO

You can learn it by practicing and testing new things you didn’t do before. Practice makes perfect 

Why Technical SEO is Important

Because the technical state directly influences how conveniently people consume a website's content, and how crawlers like Google understand it and eventually rank its pages

When Is A Technical Evaluation Required

Basically, as soon as you have a website and are ready to sell your product or service, you'd better start paying attention to technical audits

Audit Technical Skills

Basic HTML and CSS knowledge, along with some JS and the particular CMS you are working with: WordPress, Joomla, Shopify, etc.

That's it. I hope you enjoyed the content; it took a lot of effort to create this guide

You can help me spread the message by sharing it on your social media; I would be glad if you did. Also, subscribe to my newsletter so you don't miss any information that can help you grow in SEO
