The Crawl-delay directive

If the server is overloaded and can't process the robot's requests in time, use the Crawl-delay directive. It lets you specify the minimum interval (in seconds) that the search robot should wait after loading one page before starting to load the next.

Before changing the crawl rate for your site, find out which pages the robot requests most often.

  • Analyze the server logs. Contact the person responsible for the site or the hosting provider.
  • View the list of URLs on the Indexing → Crawl statistics page in Yandex.Webmaster (set the option to All pages).

If you find that the robot is accessing service pages, prohibit their crawling in the robots.txt file using the Disallow directive. This will help reduce the number of unnecessary robot requests.
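For example, a group that blocks service sections might look like this (the /search/ and /admin/ paths are placeholders; substitute the service URLs you actually find in your logs):

User-agent: *
Disallow: /search/ # hypothetical service path: internal search results
Disallow: /admin/ # hypothetical service path: administrative section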

How to correctly specify the Crawl-delay directive

To maintain compatibility with robots that may deviate from the standard when processing robots.txt, add the Crawl-delay directive to the group that starts with the User-agent entry, right after the Disallow and Allow directives.
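For example, in a group that contains both Allow and Disallow, the directive goes at the end (the /catalog path is only an illustration):

User-agent: Yandex
Allow: /catalog # illustrative path
Disallow: /
Crawl-delay: 2.0 # placed after the Allow and Disallow directives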

The Yandex search robot supports fractional values for Crawl-delay, such as "0.1". This doesn't guarantee that the search robot will visit your site 10 times per second, but it can speed up crawling.
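A fractional value is written the same way as a whole one (0.5 here is only an illustration of the format):

User-agent: Yandex
Crawl-delay: 0.5 # sets a 0.5-second timeout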

These instructions are not taken into account by the robot that crawls the RSS feed created to generate Turbo pages.

Note. The maximum allowed value in the directive for Yandex is 2.0. You can set the desired speed at which the robot will load your site pages in the Site crawl rate section of Yandex.Webmaster.

Examples:

User-agent: Yandex
Crawl-delay: 2.0 # sets a 2-second timeout

User-agent: *
Disallow: /search
Crawl-delay: 1.5 # sets a 1.5-second timeout