SEO (Search Engine Optimization) has been a huge part of my work on websites, both during the build and long after launch. At one of my past jobs I had the task of making sure each website got seen and crawled by the major search engines as soon as possible, and ranked as high as possible, as organically as possible, within a short amount of time.
First let me start by saying that MOZ.com is the BEST place to start learning about SEO and about getting higher click-through rates and rankings. They are also a top source on the ever-changing landscape, because it changes pretty fast. https://moz.com/beginners-guide-to-seo
THE TASK LIST
Ensure a page’s TITLE, META DESCRIPTION, and META KEYWORDS are as descriptive as possible; this is web 101 practice.
META "keywords" is quite outdated these days but still seems to be in use. Keep it to roughly 10-20 words, so make them count! (There are website tools out there that let you look through their databases to see "keyword weighting" values. These can help determine whether certain words or phrases are unique enough for high visibility, based on historic searches.)
Ensure attributes are always filled in for links and images, as descriptively as possible (see the sketch after this list):
- <a href> tags must have the TITLE attribute filled in
- <img src> tags must have ALT attributes filled in
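To make the above concrete, here is a minimal sketch of what these basics can look like; the site name, description, keywords, and file paths are placeholder examples, not values from a real project:

<head>
  <title>Acme Widgets | Hand-Built Widgets & Repair</title>
  <meta name="description" content="Acme Widgets builds and repairs hand-made widgets, with free shipping on orders over $50.">
  <meta name="keywords" content="widgets, widget repair, hand-built widgets">
</head>
...
<a href="/widget-repair" title="Learn about our widget repair service">Widget Repair</a>
<img src="/images/blue-widget.jpg" alt="A hand-built blue widget on a workbench">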
LINKS
Make sure your URLs are descriptive, short, keyword rich when possible, and avoid non-letter characters. Search engines require unique URLs for each page on a website so they can display pages in search results, but clear URL structure and naming is also helpful for people who are trying to understand what a specific URL is about.
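For example (the domain and paths here are placeholders), a descriptive URL like

https://www.example.com/blog/seo-basics

is much easier for both people and search engines to understand than something like

https://www.example.com/index.php?p=1234&cat=7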
The next step is to set the "canonical" to exactly one URL. Some content management systems generate alternative URLs for the same content, and to search engines that can look like duplicate content and bring down your page ranking.
Last item: all outbound links should open in a new tab or window so they do not take the user off the current page entirely and the user can easily return. A small sketch of both the canonical tag and an outbound link follows.
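Here is a minimal sketch of both points, assuming example.com as a placeholder domain; the canonical link element goes in the page <head>, and the outbound link opens in a new tab (rel="noopener" is a common companion to target="_blank"):

<link rel="canonical" href="https://www.example.com/blog/seo-basics">

<a href="https://www.anothersite.com/related-article" title="Related article on another site" target="_blank" rel="noopener">Related article</a>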
OPEN GRAPH PROTOCOL
docs: https://ogp.me
There are social-signal meta tags that should be applied to web pages so that API calls from social platforms can pull this information and render it on their platforms. These are the common OG tags; many more are listed in the docs depending on the content you are producing.
- <meta property="og:title" content="">
- <meta property="og:description" content="">
- <meta property="og:image" content="">
Generally you should use the same info as the page META tags, but it can absolutely be different content. Social platforms such as Facebook, Twitter, LinkedIn, etc. will pull and display this info. Twitter, however, has more granularity and its own specific subset of META tags (using the name attribute rather than property).
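A filled-in sketch with placeholder values (og:url and og:type are also part of the protocol's basic set):

<meta property="og:title" content="Acme Widgets | Hand-Built Widgets & Repair">
<meta property="og:description" content="Acme Widgets builds and repairs hand-made widgets.">
<meta property="og:image" content="https://www.example.com/images/og-share-1200x630.jpg">
<meta property="og:url" content="https://www.example.com/">
<meta property="og:type" content="website">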
docs: https://developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
- <meta name="twitter:title" content="">
- <meta name="twitter:image" content="">
- <meta name="twitter:description" content="">
- <meta name="twitter:creator" content="">
- <meta name="twitter:card" content="">
Generally the twitter:card image is listed at different dimensions from the standard og:image, so you might find yourself making a second graphic asset so the card renders properly in the display view.
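A matching filled-in sketch, again with placeholder values and a separate image asset sized for the card:

<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Acme Widgets | Hand-Built Widgets & Repair">
<meta name="twitter:description" content="Acme Widgets builds and repairs hand-made widgets.">
<meta name="twitter:image" content="https://www.example.com/images/twitter-card-1200x600.jpg">
<meta name="twitter:creator" content="@acmewidgets">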
You may want to check that pages are rendering the social data correctly by using LINTERS, since the information often gets cached or obfuscated and you need a way to debug it. These tools work best when they come from the social networks themselves (for example, Facebook's Sharing Debugger, Twitter's Card Validator, and LinkedIn's Post Inspector).
SITE ARCHITECTURE
robots.txt
docs: https://developers.google.com/search/docs/crawling-indexing/robots/intro
Basically this is a text file that sits at the root of the website and tells search engines which URLs to crawl, which also helps manage crawl traffic. It signals to search engines which pages to crawl and which to omit via "Allow" and "Disallow" directives.
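A minimal robots.txt sketch (the disallowed path and sitemap URL are placeholders):

User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://www.example.com/sitemap.xml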
sitemap.xml
docs: https://moz.com/blog/xml-sitemaps
Next up is to generate a sitemap XML document in the root directory so that Google can figure out how to index all the pages on the website and understand the site structure; it also makes large sites more efficient for search engines to crawl. It simply contains a list of URLs, showing the bots how many useful, rank-able URLs there are on the website, along with optional hints such as last modification date, change frequency, and priority.
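A minimal sitemap.xml sketch with placeholder URLs:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/blog/seo-basics</loc>
    <priority>0.6</priority>
  </url>
</urlset>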
WEBSITE TRACKING
There are two tools where you need to register the XML documents so the major search engines can begin scheduling an indexing of the website. The tools also check the document for errors before submitting, and you can acquire additional information about what keywords are being used to find your site.
- Google Webmaster Tools (GWT): https://search.google.com/search-console
- Bing Webmaster Tools (BWT): https://www.bing.com/webmasters
The last steps are to add tracking code snippets (a sketch of what one looks like follows the list below). These tools help track and generate reports on where users are coming from, including the user's journey throughout the website, which can help product owners gain additional insight into how to further improve site navigation and traffic.
- Google Analytics: https://analytics.google.com/analytics/
- HotJar: https://www.hotjar.com/
- Microsoft Clarity: https://clarity.microsoft.com/projects/
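As an example, the Google Analytics (gtag.js) snippet generally looks like the sketch below and goes near the top of the <head> on every page you want tracked; G-XXXXXXX is a placeholder for your own measurement ID, and the other tools provide similar copy-and-paste snippets:

<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXX'); // placeholder measurement ID
</script>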
You will be able to measure all sorts of things: engagement metrics, conversion rates, time on page, pages per visit, bounce rate, scroll depth, search traffic, and soooo much more!
LINK BUILDING
Backlinks also help generate qualified traffic to a site and establish authority for your content, so getting linked from multiple high-quality sources helps with ranking, as you become seen as a more trustworthy site.
SEO TECHNICAL ASSESSMENTS
You may want to read a past post, "SEO Technical Site Assessment (Example)", which goes more in-depth on how to evaluate existing large project sites and how to collect and analyze that information.
A search engine does not see a webpage the way our human eyes do. Visuals are not as special (except for their descriptive ALT text) when search engines parse through the page. Strip away the images, media files, and gorgeous cascading style sheets, and you're left with basically TEXT. So, as you can see, your document's keywords count, and this can go into an entirely deeper conversation. Using keywords consistently within the context of the content is key to getting picked up and ranked higher by search engines. Just don't "keyword stuff", as the Google algorithm can clearly detect this and demote the page. There are so many technical and analytical things to hit to ensure that you ride the algorithmic wave in the hope of getting indexed and ranked well.
Google Analytics
By Justin Cutroni
ISBN-13: 978-0596158002
I also suggest this book by O'Reilly, in addition to the MOZ website, to learn as much as possible about SEO and to implement their recommendations!
RELATED POST:
SEO Technical Site Assessment (Example)