Robots.txt Generator

Create a robots.txt file from scratch or choose one of the suggested options below.

General suggestions
Allow everything
User-agent: *
Allow: /
Disallow crawling of the entire website
User-agent: *
Disallow: /
Allow everything for Google only
User-agent: *
Disallow: /
User-agent: Googlebot
Allow: /
Disallow everything for the most commonly blocked bots
User-agent: AhrefsBot
Disallow: /
User-agent: SemrushBot
Disallow: /
User-agent: YandexBot
Disallow: /
User-agent: baiduspider
Disallow: /
Disallow everything for all Google bots
User-agent: Googlebot
Disallow: /
User-agent: APIs-Google
Disallow: /
User-agent: Mediapartners-Google
Disallow: /
User-agent: AdsBot-Google-Mobile
Disallow: /
User-agent: AdsBot-Google
Disallow: /
User-agent: Googlebot-Image
Disallow: /
User-agent: Googlebot-News
Disallow: /
User-agent: Googlebot-Video
Disallow: /
User-agent: Storebot-Google
Disallow: /
Allow everything for all Google bots
User-agent: Googlebot
Allow: /
User-agent: APIs-Google
Allow: /
User-agent: Mediapartners-Google
Allow: /
User-agent: AdsBot-Google-Mobile
Allow: /
User-agent: AdsBot-Google
Allow: /
User-agent: Googlebot-Image
Allow: /
User-agent: Googlebot-News
Allow: /
User-agent: Googlebot-Video
Allow: /
User-agent: Storebot-Google
Allow: /
Ready-made robots.txt files for popular CMSs
Robots.txt for WordPress
User-agent: *
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-login.php
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: */trackback
Disallow: */*/trackback
Disallow: */*/feed/*/
Disallow: */feed
Disallow: /*?*
Disallow: /cgi-bin
Disallow: /*.php$
Disallow: /*.inc$
Disallow: /*.gz$
Allow: */uploads
Allow: /*.js
Allow: /*.css
Allow: /*.png
Allow: /*.jpg
Allow: /*.jpeg
Allow: /*.gif
Allow: /*.svg
Allow: /*.webp
Allow: /*.pdf
Robots.txt for Joomla
User-agent: *
Disallow: /administrator/
Disallow: /bin/
Disallow: /cache/
Disallow: /cli/
Disallow: /components/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /layouts/
Disallow: /libraries/
Disallow: /logs/
Disallow: /modules/
Disallow: /plugins/
Disallow: /tmp/
Robots.txt for MODX
User-agent: *
Disallow: /*?id=
Disallow: /assets
Disallow: /assets/cache
Disallow: /assets/components
Disallow: /assets/docs
Disallow: /assets/export
Disallow: /assets/import
Disallow: /assets/modules
Disallow: /assets/plugins
Disallow: /assets/snippets
Disallow: /connectors
Disallow: /core
Disallow: /index.php
Disallow: /install
Disallow: /manager
Disallow: /profile
Disallow: /search
Robots.txt for Drupal
User-agent: *
Allow: /core/*.css$
Allow: /core/*.css?
Allow: /core/*.js$
Allow: /core/*.js?
Allow: /core/*.gif
Allow: /core/*.jpg
Allow: /core/*.jpeg
Allow: /core/*.png
Allow: /core/*.svg
Allow: /profiles/*.css$
Allow: /profiles/*.css?
Allow: /profiles/*.js$
Allow: /profiles/*.js?
Allow: /profiles/*.gif
Allow: /profiles/*.jpg
Allow: /profiles/*.jpeg
Allow: /profiles/*.png
Allow: /profiles/*.svg
Disallow: /core/
Disallow: /profiles/
Disallow: /README.txt
Disallow: /web.config
Disallow: /admin/
Disallow: /comment/reply/
Disallow: /filter/tips/
Disallow: /node/add/
Disallow: /search/
Disallow: /user/register/
Disallow: /user/password/
Disallow: /user/login/
Disallow: /user/logout/
Disallow: /index.php/admin/
Disallow: /index.php/comment/reply/
Disallow: /index.php/filter/tips/
Disallow: /index.php/node/add/
Disallow: /index.php/search/
Disallow: /index.php/user/password/
Disallow: /index.php/user/register/
Disallow: /index.php/user/login/
Disallow: /index.php/user/logout/
Robots.txt for Magento
User-agent: *
Disallow: /index.php/
Disallow: /*?
Disallow: /checkout/
Disallow: /app/
Disallow: /lib/
Disallow: /*.php$
Disallow: /pkginfo/
Disallow: /report/
Disallow: /var/
Disallow: /catalog/
Disallow: /customer/
Disallow: /sendfriend/
Disallow: /review/
Disallow: /*SID=
Robots.txt for OpenCart
User-agent: *
Disallow: /*route=account/
Disallow: /*route=affiliate/
Disallow: /*route=checkout/
Disallow: /*route=product/search
Disallow: /index.php?route=product/product*&manufacturer_id=
Disallow: /admin
Disallow: /catalog
Disallow: /system
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?order=
Disallow: /*&order=
Disallow: /*?limit=
Disallow: /*&limit=
Disallow: /*?filter_name=
Disallow: /*&filter_name=
Disallow: /*?filter_sub_category=
Disallow: /*&filter_sub_category=
Disallow: /*?filter_description=
Disallow: /*&filter_description=
Disallow: /*?tracking=
Disallow: /*&tracking=
Disallow: /*compare-products
Disallow: /*search
Disallow: /*cart
Disallow: /*checkout
Disallow: /*login
Disallow: /*logout
Disallow: /*vouchers
Disallow: /*wishlist
Disallow: /*my-account
Disallow: /*order-history
Disallow: /*newsletter
Disallow: /*return-add
Disallow: /*forgot-password
Disallow: /*downloads
Disallow: /*returns
Disallow: /*transactions
Disallow: /*create-account
Disallow: /*recurring
Disallow: /*address-book
Disallow: /*reward-points
Disallow: /*affiliate-forgot-password
Disallow: /*create-affiliate-account
Disallow: /*affiliate-login
Disallow: /*affiliates
Disallow: /*?filter_tag=
Disallow: /*brands
Disallow: /*specials
Disallow: /*simpleregister
Disallow: /*simplecheckout
Disallow: *utm=
Allow: /catalog/view/javascript/
Allow: /catalog/view/theme/*/
Robots.txt for WooCommerce
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-json/
Disallow: /*add-to-cart=*
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/

How to read a robots.txt file?

User-agent
This directive specifies which crawlers the rules in the file apply to. Each search engine has its own crawlers. Here, you need to select the bot (or bots) that are allowed or disallowed to crawl your website’s pages and files.
Allow
With the Allow directive, you determine which files or pages are accessible to crawlers. It counteracts the Disallow directive by telling search engines that they can access a specific file or page within a directory that is otherwise disallowed.
Disallow
With this directive, you can tell search engines not to access certain files, pages, or sections of your website that you don't want to be indexed—for example, search and login pages, duplicates, service pages, etc.
Sitemap
It's an optional directive intended to notify search engines about the availability of a sitemap in an XML format. From the Sitemap, the crawlers get additional information about the essential pages of your website and its structure. If necessary, you can add the sitemap URL to this tool.
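Put together, a minimal robots.txt file using all four directives might look like this (the /admin/ path and the sitemap URL below are placeholders for illustration):
User-agent: *
Disallow: /admin/
Allow: /admin/public-page.html
Sitemap: https://your-site.com/sitemap.xml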

How to use our Robots.txt Generator?

We developed this robots.txt file generator to help webmasters, SEO experts, and marketers quickly and easily create robots.txt files. You can generate a robots.txt file from scratch or use ready-made suggestions. In the former case, you customize the file by setting up directives (allow or disallow crawling), the path (specific pages and files), and the bots that should follow the directives. Or you can choose a ready-made robots.txt template containing a set of the most common general and CMS directives. You may also add a sitemap to the file. As a result, you'll get a ready-made robots.txt which you can edit and then copy or download.
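
For instance, if you disallowed the /search/ page for all bots, allowed it for Googlebot, and added a sitemap, the generated file might look like this (the path and URL are placeholder values):

User-agent: *
Disallow: /search/

User-agent: Googlebot
Allow: /search/

Sitemap: https://your-site.com/sitemap.xml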

Robots.txt syntax

The robots.txt syntax consists of directives, parameters, and special characters. If you want the file to work properly, you should comply with specific content requirements when creating a robots.txt file:

1. Each directive must begin on a new line. There can only be one parameter per line.

Incorrect:
User-agent: * Disallow: /folder1/ Disallow: /folder2/

Correct:
User-agent: *
Disallow: /folder1/
Disallow: /folder2/

2. Robots.txt is case-sensitive. For example, if a website folder name is capitalized, but it’s lowercase in the robots.txt file, it can disorient crawlers.

Incorrect:
Disallow: /folder/

Correct:
Disallow: /Folder/

3. You cannot use quotation marks, spaces at the beginning of a line, or semicolons at the end of a line.

Incorrect:
Disallow: /folder1/;
Disallow: /“folder2”/

Correct:
Disallow: /folder1/
Disallow: /folder2/

How to use the Disallow directive properly?

Once you have filled the User-agent directive in, you should specify the behavior of certain (or all) bots by adding crawl instructions. Here are some essential tips:

1. Don't leave the Disallow directive without a value unless you want to allow full access. An empty value means nothing is blocked, so the bot will crawl all of the site's content.

Disallow: - allows crawling of the entire website

2. Do not list every file that you want to block from crawling. Just disallow access to a folder, and all files in it will be blocked from crawling and indexing.

Disallow: /folder/

3. Don't block access to the website with this directive:

Disallow: / - blocks access to the whole website

Otherwise, the site can be completely removed from the search results.

Besides that, make sure that essential website pages are not blocked from crawling: the home page, landing pages, product cards, etc. With this directive, you should only specify files and pages that should not appear on the SERPs.
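
For instance, a hypothetical set of rules that blocks internal search results and the login page while leaving the rest of the site crawlable could look like this:

User-agent: *
Disallow: /search/
Disallow: /login/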

Adding your Sitemap to the robots.txt file

If necessary, you can add your Sitemap to the robots.txt file. This makes it easier for bots to crawl your website's content. The sitemap file is usually located at https://your-site.com/sitemap.xml. You need to add a directive with the URL of your Sitemap as shown below:

User-agent: *
Disallow: /folder1/
Allow: /image1/
Sitemap: https://your-site.com/sitemap.xml

How to submit a robots.txt file to search engines?

You don't need to submit a robots.txt file to search engines. Whenever crawlers come to a site, they look for a robots.txt file before crawling it, and if they find one, they read it first before scanning your pages.

At the same time, if you've made any changes to the robots.txt file and want to notify Google, you can submit your robots.txt file to Google Search Console. Use the Robots.txt Tester to paste the text file and click Submit.

How to define the User-agent?

When creating robots.txt and configuring crawling rules, you should specify the name of the bot to which you're giving crawl instructions. You can do this with the help of the User-agent directive.

If you want to block or allow access to some of your content for all crawlers, indicate * (asterisk) as the User-agent:

User-agent: *

Or you might want all your pages to appear in a specific search engine, for example, Google. In this case, use the Googlebot User-agent like this:

User-agent: Googlebot

Keep in mind that each search engine has its own bots, which may differ in name from the search engine (e.g., Yahoo's Slurp). Moreover, some search engines have many crawlers depending on the crawl targets. For example, in addition to its main crawler Googlebot, Google has other bots:

  • Googlebot-News—crawls news;
  • Googlebot Smartphone—crawls mobile pages;
  • Googlebot-Video—crawls videos;
  • Googlebot-Image—crawls images;
  • Mediapartners-Google—crawls websites to determine their content and provide relevant AdSense ads.
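
For instance, if you wanted Google's image crawler to skip your site while the main crawler keeps accessing it, you could address the two bots in separate groups (a hypothetical example):

User-agent: Googlebot-Image
Disallow: /

User-agent: Googlebot
Allow: /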

How to use the Allow directive properly?

The Allow directive is used to counteract the Disallow directive. Using the Allow and Disallow directives together, you can tell search engines that they can access a specific folder, file, or page within an otherwise disallowed directory.

Disallow: /album/ - search engines are not allowed to access the /album/ directory

Allow: /album/picture1.jpg - but they are allowed to access the file picture1.jpg in the /album/ directory

With this directive, you should also specify essential website files: scripts, styles, and images. For example:

Allow: */uploads
Allow: /wp-*/*.js
Allow: /wp-*/*.css
Allow: /wp-*/*.png
Allow: /wp-*/*.jpg
Allow: /wp-*/*.jpeg
Allow: /wp-*/*.gif
Allow: /wp-*/*.svg
Allow: /wp-*/*.webp
Allow: /wp-*/*.pdf

How to add the generated robots.txt file to your website?

Search engines and other crawling bots look for a robots.txt file whenever they come to a website. But they'll only look for that file in one specific place: the root directory. So, after generating the robots.txt file, you should add it to the root folder of your website. Once uploaded, the file will be available at https://your-site.com/robots.txt.

The method of adding a robots.txt file depends on the server and CMS you are using. If you can't access the root directory, contact your web hosting provider.

FAQ

What does a robots.txt file do?

The robots.txt file tells search engines which pages they can crawl and which bots are allowed to access the website’s content.

How important is a robots.txt file?

We can solve two issues with robots.txt:

  • 1. Reduce the likelihood of certain pages being crawled, indexed, and shown in the search results.
  • 2. Save crawl budget.

With the help of robots.txt, you can block information that has no value to the user and does not affect the site's ranking, as well as confidential data.

What is User-agent * in robots.txt?

User-agent: * indicates that the guidelines provided in the robots.txt file apply to all crawlers without exception.

What does User-agent: * Disallow: / mean?

User-agent: * Disallow: / tells all bots that they are not allowed to access the website.

What should be blocked in the robots.txt file, and what should be allowed?

What content is most often restricted?

Pages that are not suitable for display in the SERPs, as well as content that users only see while interacting with the site: messages about a successful order, registration forms, etc.

Allow access to essential files for ranking purposes: scripts, styles, images.
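
As a hypothetical example, an online store might block cart, checkout, and account pages while explicitly allowing scripts, styles, and images:

User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /my-account/
Allow: /*.js
Allow: /*.css
Allow: /*.png
Allow: /*.jpg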

Under what conditions will the robots.txt file work properly?

The robots.txt file will work properly under three conditions:

  • 1. Properly specified User-agent and directives: each group begins with a User-agent line, with one directive per line.
  • 2. The file must be in the .txt format only.
  • 3. The robots.txt file must be located in the root of the website host to which it applies.
Does order matter in robots.txt?

Yes, order does matter in the robots.txt file, but not for all elements. Each group should begin with the User-agent line, followed by the Allow and Disallow directives (you can change the order of these fields), and then, when required, add the Sitemap. For example:

User-agent: Googlebot
Disallow: /folder2/
Sitemap: https://seranking.com/sitemap.xml

Can I use regex in robots.txt?

To create a robots.txt file with more flexible instructions, you can use two special characters recognized by search engines:

  • * — matches any sequence of characters (any value variation)
  • $ — marks the end of the URL path

For example:

User-agent: *
Disallow: /*page$

This means that you want to block all URLs that end with page.

How to block a crawler in robots.txt?

To do this, specify the bot in the User-agent directive, and then indicate the path that it is not allowed to crawl in the Disallow line. For example:

User-agent: Googlebot
Disallow: /folder2/

How to block all search engines?

To block access to content on your website for all crawlers, use User-agent: * together with Disallow: /. For example:

User-agent: * - applies the rules to all bots
Disallow: /folder1/ - blocks specific pages or folders
or
Disallow: / - blocks the entire website

How to use the robots.txt file?

Once the robots.txt file is created, add it to the root directory of your website. Then check whether everything works properly, for example, by using our robots.txt tester.

How long does robots.txt take effect?

Google usually checks your robots.txt file every 24-36 hours.

Is robots.txt safe?

Robots.txt is publicly available, but it's not a security threat in itself. Just don't list confidential URLs in it individually. One option is to put sensitive files in a separate subdirectory, make that directory unlistable on the server, and disallow the directory as a whole. In this case, bots (and anyone reading your robots.txt) won't be able to discover the individual files.
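
For example, instead of listing each sensitive file by name, you can disallow the whole directory, so individual file names never appear in your public robots.txt (the directory name below is just a placeholder):

User-agent: *
Disallow: /private/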
