Download and check XML sitemaps using R
Submitting an XML sitemap is not required to have a successful website, but it is definitely an SEO nice-to-have.
Nevertheless, if you do submit one, it's best to make sure it's error-free, and as you will see, it is quite straightforward to extract the URLs using R.
This function will first search for the XML sitemap URL. It starts by checking the robots.txt file to see if an XML sitemap URL is explicitly declared.
If not, the script tries a few common guesses (‘sitemap.xml’, ‘sitemap_index.xml’, …); most of the time, this is enough to find the XML sitemap URL.
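Here is a minimal sketch of that discovery step, assuming the httr package. The function name find_sitemap_url, the error handling and the list of guesses are illustrative choices, not the article's actual code.

```r
library(httr)

# Look for a "Sitemap:" line in robots.txt, then fall back to common guesses
find_sitemap_url <- function(domain) {
  robots_url <- paste0(domain, "/robots.txt")
  res <- try(GET(robots_url), silent = TRUE)

  if (!inherits(res, "try-error") && status_code(res) == 200) {
    robots_txt <- content(res, as = "text", encoding = "UTF-8")
    lines <- strsplit(robots_txt, "\r?\n")[[1]]
    declared <- grep("^sitemap:", lines, ignore.case = TRUE, value = TRUE)
    if (length(declared) > 0) {
      # Return the first explicitly declared sitemap URL
      return(trimws(sub("^sitemap:\\s*", "", declared[1], ignore.case = TRUE)))
    }
  }

  # Nothing declared in robots.txt: try the usual locations
  for (guess in c("/sitemap.xml", "/sitemap_index.xml")) {
    candidate <- paste0(domain, guess)
    check <- try(HEAD(candidate), silent = TRUE)
    if (!inherits(check, "try-error") && status_code(check) == 200) {
      return(candidate)
    }
  }
  NA_character_
}

find_sitemap_url("https://www.example.com")
```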
Then, the XML sitemap URL is fetched and the URLs extracted.
If it's a classic XML sitemap, a data frame (R's table-like data structure) will be produced and returned.
If it's an XML sitemap index, the process starts over for every XML sitemap it references.
This will produce a data frame with all the information extracted.
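A possible sketch of the extraction step, using the xml2 package: parse the sitemap, and if the root element is a sitemapindex, recurse into each child sitemap. The function and column names (get_sitemap_urls, loc, lastmod) are assumptions for illustration, not necessarily the article's implementation.

```r
library(xml2)

get_sitemap_urls <- function(sitemap_url) {
  doc <- read_xml(sitemap_url)
  ns  <- xml_ns(doc)  # the default sitemaps.org namespace gets the prefix "d1"

  if (xml_name(doc) == "sitemapindex") {
    # Index sitemap: fetch every referenced sitemap and stack the results
    child_sitemaps <- xml_text(xml_find_all(doc, ".//d1:sitemap/d1:loc", ns))
    return(do.call(rbind, lapply(child_sitemaps, get_sitemap_urls)))
  }

  # Classic sitemap: one row per <url> entry
  url_nodes <- xml_find_all(doc, ".//d1:url", ns)
  data.frame(
    loc     = xml_text(xml_find_first(url_nodes, "./d1:loc", ns)),
    lastmod = xml_text(xml_find_first(url_nodes, "./d1:lastmod", ns)),
    stringsAsFactors = FALSE
  )
}

sitemap_df <- get_sitemap_urls(find_sitemap_url("https://www.example.com"))
```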
Another interesting function allows you to crawl the sitemap URLs and check whether your web pages return proper 200 HTTP codes, using HEAD requests (which are easier on the web server).
It will add a dedicated column with the HTTP code filled in. You can inspect the data inside RStudio (for example with View()) to discover, at the time of writing, that most of the XML sitemap URLs are actually redirects...
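A hedged sketch of that crawl with httr, reusing the hypothetical sitemap_df and loc column from the previous sketch. Redirects are deliberately not followed here so that 301/302 codes show up as such.

```r
library(httr)

check_http_codes <- function(df) {
  df$http_code <- vapply(df$loc, function(url) {
    # HEAD request only; don't follow redirects so 301/302 stay visible
    res <- try(HEAD(url, timeout(10), config(followlocation = 0L)), silent = TRUE)
    if (inherits(res, "try-error")) return(NA_integer_)
    status_code(res)
  }, integer(1))
  df
}

sitemap_df <- check_http_codes(sitemap_df)
View(sitemap_df)  # inspect the result in RStudio
```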
You might have noticed that this XML sitemap comes with a "lastmod" field. This is an optional field that explicitly declares the last modification date to Google, which theoretically allows Google to optimise its crawling of the website.
It also allows us to understand how fresh a website's content is, since we can plot it:
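For example, with ggplot2 and the hypothetical sitemap_df from above, a histogram of the lastmod dates might look like this:

```r
library(ggplot2)

# lastmod is an ISO 8601 date, so as.Date() can parse the leading YYYY-MM-DD part
sitemap_df$lastmod_date <- as.Date(sitemap_df$lastmod)

ggplot(sitemap_df, aes(x = lastmod_date)) +
  geom_histogram(binwidth = 30) +  # roughly one bar per month
  labs(x = "lastmod date", y = "Number of URLs",
       title = "Content freshness according to the XML sitemap")
```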
Let's try to get a clearer picture by extracting the years:
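One way to do that, assuming dplyr and the lastmod_date column created in the previous sketch:

```r
library(dplyr)
library(ggplot2)

# Count URLs by year of last modification
urls_per_year <- sitemap_df %>%
  filter(!is.na(lastmod_date)) %>%
  mutate(year = format(lastmod_date, "%Y")) %>%
  count(year)

ggplot(urls_per_year, aes(x = year, y = n)) +
  geom_col() +
  labs(x = "Year of last modification", y = "Number of URLs")
```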
If you prefer a % cumulative view:
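A possible sketch of the cumulative view, under the same assumptions:

```r
library(dplyr)
library(ggplot2)

# Cumulative share of URLs ordered by last-modification date
cumulative_df <- sitemap_df %>%
  filter(!is.na(lastmod_date)) %>%
  arrange(lastmod_date) %>%
  mutate(cumulative_pct = row_number() / n() * 100)

ggplot(cumulative_df, aes(x = lastmod_date, y = cumulative_pct)) +
  geom_line() +
  labs(x = "lastmod date", y = "% of URLs (cumulative)")
```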
It can take some time depending on the number of URLs. It took several hours for , for example.
Or, if you prefer,
Like in the , it's quite easy to count the HTTP codes.
(I got some help from the library.)
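For instance, with dplyr (the helper library isn't named in this extract, so that choice is an assumption) and the http_code column added earlier:

```r
library(dplyr)

# Tally how many URLs returned each HTTP status code, with a percentage column
sitemap_df %>%
  count(http_code, sort = TRUE) %>%
  mutate(pct = round(n / sum(n) * 100, 1))
```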