Technical SEO basics
What is a URL?
URL stands for Uniform Resource Locator, which probably doesn’t really help you in understanding what it is. Basically, a URL is the ‘address’ of a resource on the World Wide Web. Usually this means a webpage, such as this URL for our SEO services page: http://www.upliftdigital.co.uk/seo-services/, but other resources such as images also have their own URLs.
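If you want to see the parts of a URL for yourself, Python’s standard library can split one up. A quick sketch, using the SEO services URL above:

```python
from urllib.parse import urlparse

# Break the example URL into its component parts.
parts = urlparse("http://www.upliftdigital.co.uk/seo-services/")

print(parts.scheme)  # the protocol: "http"
print(parts.netloc)  # the domain: "www.upliftdigital.co.uk"
print(parts.path)    # the path to the page: "/seo-services/"
```

The scheme, domain and path together are what make the address unique on the web.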
What is a domain name?
Everything on the internet has an IP (Internet Protocol) address, which is usually an unwieldy string of numbers (and, in the case of IPv6, letters) that is not particularly memorable. The computer you use to access the internet has an IP address, as does the server that hosts this website. A domain name is simply a memorable name that the Domain Name System (DNS) maps to the IP address of the server hosting a website. In the case of this website, the domain name is upliftdigital.co.uk.
What is a sub-domain?
In simple terms, this is a ‘sub-division’ of a domain which stands alone as a separate entity but is still strongly associated with the primary domain e.g. west.example.com and east.example.com are both sub-domains of the example.com domain.
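A naive way to pull the sub-domain out of a hostname is to compare string endings, as in this sketch (the hostnames are the example ones above). Real code should consult the public suffix list, since suffixes like co.uk make naive splitting unreliable:

```python
# Naive sketch: check whether a hostname is a sub-domain of a given
# domain by comparing the endings of the two strings.
def subdomain_of(hostname, domain):
    """Return the sub-domain part of hostname, or None if there is none."""
    if hostname != domain and hostname.endswith("." + domain):
        return hostname[: -(len(domain) + 1)]
    return None

print(subdomain_of("west.example.com", "example.com"))  # west
print(subdomain_of("example.com", "example.com"))       # None
```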
What is http?
http stands for Hypertext Transfer Protocol and is the foundation of data communication for the World Wide Web. You will notice that most web URLs begin with http:// (or its secure counterpart https://), as this designates the protocol used to transfer data between your browser and the server when a URL is requested.
What is https?
https is simply the secure version of http, and uses TLS encryption to protect data as it is transferred. This is particularly important for websites that handle personal data, process payments or store passwords. Back in 2014 Google launched a campaign for webmasters to adopt https, and made it a (lightweight) ranking signal.
What is Schema?
In SEO terms, Schema.org mark-up is a specific vocabulary of code that is used to better describe certain elements and entities on a webpage. This ‘structured data’ gives search engines a clearer understanding of the content on a page, and can also make pages eligible for rich results in search listings. Visit Schema.org for more information.
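Structured data is most commonly added as JSON-LD inside a script tag. A minimal sketch, built here in Python so the shape is easy to see (the organisation details are just illustrative):

```python
import json

# A minimal JSON-LD object describing an organisation.
# The vocabulary ("@context", "@type", "name", "url") comes from Schema.org.
data = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Uplift Digital",
    "url": "http://www.upliftdigital.co.uk/",
}

# On a real page this JSON sits inside a script tag in the <head> or <body>.
snippet = '<script type="application/ld+json">' + json.dumps(data) + "</script>"
print(snippet)
```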
What is pagination?
You will probably be most familiar with pagination on large ecommerce websites that have a lot of products in a particular category. Rather than list every product on one page, the category can be divided across a series of pages, each with its own URL, e.g. www.shop.com/products/3 for the third page of products.
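The URL pattern above can be generated with a couple of lines of code. A sketch, assuming a hypothetical site that appends the page number as a path segment and keeps page one at the plain category URL:

```python
# Sketch: build the URL for a given page of a paginated category listing.
def page_url(base, page):
    if page == 1:
        return base  # the first page lives at the plain category URL
    return base.rstrip("/") + "/" + str(page)

print(page_url("http://www.shop.com/products", 3))
# http://www.shop.com/products/3
```

Other sites use a query parameter instead (e.g. ?page=3); either way, each page of results gets its own distinct URL.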
What is robots.txt?
The robots.txt file sits in the root of your server and contains instructions for search engines and other crawling robots or ‘bots’. These instructions are usually focused on telling search engines which parts of the site they should and shouldn’t crawl, or blocking certain bots from crawling the website at all. Note that robots.txt controls crawling, not indexing: a blocked page can still appear in search results if it is linked to from elsewhere.
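Python’s standard library includes a robots.txt parser, which is handy for checking how a bot would interpret your rules. A sketch that parses an example file from a string (the paths are hypothetical):

```python
from urllib import robotparser

# An example robots.txt: all bots may crawl the site except these two paths.
rules = """\
User-agent: *
Disallow: /checkout/
Disallow: /admin/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "http://www.shop.com/products/"))  # True
print(rp.can_fetch("*", "http://www.shop.com/checkout/"))  # False
```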
What are breadcrumbs?
In terms of websites, ‘breadcrumbs’ are a form of internal linking that gives the user a clear path showing whereabouts they are within the overall site structure. Breadcrumbs are usually positioned towards the top of the page as a text-based navigational aid, and look something like this: Home >> Internal Page >> Internal Page 2 >> This page
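Because breadcrumbs mirror the site structure, they can often be derived straight from the URL path. A rough sketch, assuming hyphenated path segments that can simply be title-cased into page names:

```python
# Sketch: build a text breadcrumb trail from a URL path, deriving
# page names from the path segments (real sites would use page titles).
def breadcrumbs(path):
    crumbs = ["Home"]
    for segment in path.strip("/").split("/"):
        if segment:
            crumbs.append(segment.replace("-", " ").title())
    return " >> ".join(crumbs)

print(breadcrumbs("/seo-services/local-seo/"))
# Home >> Seo Services >> Local Seo
```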
What is a canonical tag?
It is not uncommon for websites to have pages that are very similar. A lot of ecommerce sites, for instance, may have individual pages for near-identical products, e.g. different colours of the same item. This is a form of duplicate content, which can dilute ranking signals and lead Google to filter some versions out of its results. The rel=canonical tag can be used to mark up duplicate pages without redirecting them, telling Google which page to treat as the ‘canonical’ version for indexing; the tag is a strong hint rather than a directive.
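In practice, each duplicate page carries a link tag in its head pointing at the preferred URL. A sketch that generates the tag for a set of hypothetical colour-variant product pages:

```python
# Sketch: emit the rel=canonical link tag that each duplicate page
# would carry in its <head>. All URLs here are hypothetical.
def canonical_tag(url):
    return '<link rel="canonical" href="' + url + '" />'

preferred = "http://www.shop.com/products/widget/"
duplicates = [
    "http://www.shop.com/products/widget-red/",
    "http://www.shop.com/products/widget-blue/",
]

# Every colour variant points at the same preferred page:
for page in duplicates:
    print(page, "->", canonical_tag(preferred))
```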