This post discusses the Technical SEO course as part of Track 5 (of 7) for the Growth Marketing program from the CXL Institute. The course is taught by Marketing VP Martijn Scheijbeler. It covers the foundations of SEO-based site structure, URLs, indexing, web crawlers, sitemaps, site speed, structured data, and how to conduct a site SEO audit.
Scheijbeler does a solid job of introducing the building blocks of technical SEO, without ever falling into the trap of deep-diving. Along that same line, I have provided an overview of the course structure and key insights, but in most sections I provide additional external resources and videos rather than attempting to unravel the technical SEO universe.
The Basics of Technical SEO.
SEO is a massive topic, but technical SEO is best thought of as the foundational piece that supports all other SEO activity. For a deeper but still general overview, check out Backlinko’s Technical SEO guide.
Scheijbeler rightly notes that while technical SEO is important, proportionality is key. He suggests that dedicated technical SEO resources make sense for sites with 50K+ pages (typically sites with lots of automatically generated pages, where technical issues can cripple business operations).
For smaller teams and companies just starting out, he recommends performing a technical SEO site audit using ScreamingFrog. Other tools include SiteBulb, Botify, and Deepcrawl. He also suggests setting up access to Google Search Console and Bing Webmaster Tools. I use both of these regularly – they’re great for ongoing updates and management, especially with limited resources.
The Head & META Tags.
The course begins with the absolute basics of HTML page structure. It seems obvious, but I’m always surprised to learn how many marketers shy away from any form of code. Not much time is spent here other than to explain that the <head> is where the most options exist to impact SEO without actually changing the displayed page itself.
Scheijbeler provides a quick summary of the types of meta tags commonly deployed within the head (an illustrative snippet follows the list):
- Description – 140-160 characters (what shows up in the search results).
- Keywords – No longer widely used.
- Robots – Index or noindex, follow or nofollow, as well as instructions for specific bots (Googlebot, Bingbot, etc.)
- Canonical – Tells bots whether this is the original version of the page or points them to the canonical source.
- Open Graph & Twitter Cards – Let you decide what social networks display when a page is shared, rather than what they grab automatically (especially the images).
- Hreflang – For international SEO; lists alternative URLs for versions of the page in other languages or regions.
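To make these concrete, here is a rough sketch of what a <head> might look like with these tags in place – the URLs, values, and wording are illustrative placeholders, not recommendations from the course:

```html
<head>
  <title>Example Product Page</title>

  <!-- Description: the snippet search engines can show in results (roughly 140-160 characters) -->
  <meta name="description" content="A short, compelling summary of the page that search engines can display in the results.">

  <!-- Robots: index the page and follow its links -->
  <meta name="robots" content="index, follow">

  <!-- Canonical: points to the original version of this content -->
  <link rel="canonical" href="https://www.example.com/products/widget">

  <!-- Open Graph / Twitter Cards: control what social networks display -->
  <meta property="og:title" content="Example Product Page">
  <meta property="og:image" content="https://www.example.com/images/widget.jpg">
  <meta name="twitter:card" content="summary_large_image">

  <!-- Hreflang: alternative language/region versions of this page -->
  <link rel="alternate" hreflang="en-us" href="https://www.example.com/products/widget">
  <link rel="alternate" hreflang="de" href="https://www.example.com/de/products/widget">
</head>
```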
The Body & Content Tags.
On-page SEO (body and content tags including headings, styling, paragraphs, images, links, etc.) is a heavily covered topic – see the Moz on-page SEO guide for more depth.
For me, the key thing to remember is that these tags exist for search engines. That is to say, we could visually design pages to our liking using CSS while completely ignoring SEO content tags. But doing so would make the site all but invisible to search engines.
Headline and body tags are not just reference points for design! Everyone on the marketing team should think of these tags as search-only mechanisms to maintain discipline. (I’ve seen pages with 10 or more H1 tags, which makes Google bots cry.)
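For illustration (my own hypothetical outline, not an example from the course), a cleanly tagged page keeps a single H1 for the main topic and uses lower-level headings, descriptive alt text, and meaningful anchor text, leaving the visual styling to CSS:

```html
<body>
  <!-- One H1: the page's main topic -->
  <h1>Guide to Widget Maintenance</h1>

  <!-- Sub-topics use H2/H3 rather than extra H1s -->
  <h2>Cleaning Your Widget</h2>
  <p>Step-by-step cleaning instructions...</p>

  <h2>Replacing Worn Parts</h2>
  <h3>Ordering Replacements</h3>
  <p>Where to order parts...</p>

  <!-- Descriptive alt text and anchor text help search engines, too -->
  <img src="/images/widget-disassembled.jpg" alt="A disassembled widget on a workbench">
  <a href="/widgets/parts">Browse replacement widget parts</a>
</body>
```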
URL Structures and Indexing.
Scheijbeler goes somewhat in-depth on how to create a good URL structure. He discusses options for selecting appropriate top-level domains and for structuring URLs using folders vs. more technical parameters. He suggests defaulting to a visually clean folder structure when in doubt (write it out and see what it looks like), with keyword research driving much of your thinking.
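To make that concrete, here is a hypothetical before-and-after (my own example, not Scheijbeler’s) contrasting a parameter-driven URL with a clean, keyword-based folder structure:

```text
# Parameter-heavy: harder for users and search engines to read
https://www.example.com/index.php?cat=23&prod=4521&sessionid=98f3

# Clean folder structure built around keywords
https://www.example.com/running-shoes/trail/waterproof-trail-runner
```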
For more technical detail see the video below.
Robots.txt Files.
Robots.txt files are another potential technical SEO rabbit hole, but Scheijbeler does a solid job of covering the key elements. He goes into how to identify which rules apply to specific user agents, the type of markup to use, and how to block out certain things like your site’s dynamic pages. He provides a sample robots.txt file for reference.
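The course’s sample file isn’t reproduced here, but a minimal robots.txt along these lines (the paths are placeholders) shows the main elements he covers – user-agent targeting, disallow rules for dynamic pages, and a sitemap reference:

```text
# Rules for all crawlers
User-agent: *
# Block dynamic, parameter-driven pages such as internal search results
Disallow: /search?
Disallow: /cart/

# Rules for a specific bot
User-agent: Googlebot
Disallow: /staging/

Sitemap: https://www.example.com/sitemap.xml
```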
A great caution mentioned in the course is that people often proceed blindly with robots.txt changes because they don’t alter the visual layout. But robots.txt missteps on larger sites can be catastrophic. Scheijbeler recommends using Google Search Console to test any potential changes to robots.txt files before deploying.
In addition to the video below, have a look at Yoast’s robots.txt guide or Google’s robots.txt reference guide for more detail on robots.
Crawl Behaviour and Crawlability.
This is a deep, deep topic. SEMrush’s crawlability guide is a good place to spend the day if this is your cup of tea.
The course here establishes the transition from crawling to indexing (Google being able to efficiently establish which content is of value): you want to give search engines the cleanest possible path to crawl while managing your crawl budget. It also goes into the best approach for using Google Search Console to find errors and fix them.
Scheijbeler spends some time covering the different status codes (an area of constant confusion for many). In layman’s terms, 200s are ‘OK’ signals, 300s are redirects, 400s mean the content is missing, forbidden, or gone, and 500s indicate server breakage or maintenance.
Honestly, unless you’re on the dev team, don’t worry about status codes. Read up on Moz’s Status Codes guide if you’re keen on going further.
Sitemaps.
Because sitemaps seem conceptually easier to grasp than other, more abstract elements of technical SEO, they can often be glossed over or improperly deployed. A few key recommendations from the course (a minimal sitemap example follows the list):
- XML sitemaps are the easiest solution and can be generated automatically by most CMSs and plugins.
- Create a news sitemap and submit it to Google News to share new content the moment it is published.
- Ping search engines to alert them of new content, e.g.:
https://www.google.com/ping?sitemap={URL to sitemap}
https://www.bing.com/ping?sitemap={URL to sitemap}
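As referenced above, a bare-bones XML sitemap looks like this (the URL and values are placeholders – generators and CMS plugins produce the same structure automatically):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/products/widget</loc>
    <lastmod>2021-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```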
Scheijbeler gets into very technical elements including sitemap compression. Again, unless you’re on the dev team, leave this depth alone (compression can cause other issues unless you know what you’re doing).
Read Dynomapper’s Sitemap guide for further learning.
Structured Data Markup.
Candidly, structured data (and how to use schemas) is an area of the course where Scheijbeler doesn’t do a very good job of explaining conceptually. To be sure, it’s a complex topic, and I don’t know any human being not confused by Schema.org (open source, developed by Google, Microsoft, etc. to standardize markup).
Ultimately, structured data is supposed to help search engines truly understand what your content is about – by ‘marking up your content’, you tell the search engines what it is and how it relates to other content. Confused? Welcome to the club. For the basics, watch the video below.
Key recommendations from the course (a brief JSON-LD sketch follows the list):
- For Schema.org integrations use JSON-LD.
- You can get most recurring Schema.org snippets from Steal Our JSON.
- To implement, use Google Tag Manager rather than adding the markup manually.
- To validate, test in Google’s Structured Data Testing Tool.
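As promised above, here is a minimal JSON-LD sketch (an illustrative Article markup of my own, not taken from the course) of the kind you would adapt from a template, deploy via Google Tag Manager, and then validate with the testing tool:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Guide to Widget Maintenance",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "datePublished": "2021-01-15",
  "image": "https://www.example.com/images/widget.jpg"
}
</script>
```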
Improving Site and Page Speed.
If you have reasonable technical expertise, you can invest time in detailed site and page speed optimization. For the average marketer, I don’t recommend it – caching, server requests, compression, etc. can actually create larger problems if you don’t know what you’re doing.
If you do want to dive into the deep end of speed optimization, the Backlinko SEO Page Speed Guide is a good place to start. If you want to get your hands dirty on your own website, use webpagetest.org, Google Lighthouse, or the Google PageSpeed Insights tool.
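If you want to experiment without leaving the command line, Lighthouse also ships as a CLI; a quick sketch (assuming Node.js is already installed, and with a placeholder URL) looks like this:

```text
# Install the Lighthouse CLI globally, then audit a page and open the report
npm install -g lighthouse
lighthouse https://www.example.com --view
```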
Send your suggestions for Growth Marketing resources to .