By default, on shared and reseller servers, if a robots.txt file does not exist in the document root (public_html) directory, the server automatically creates a new robots.txt file at midnight.
This article describes how to disable this behavior. You may want to do this, for example, if you do not need a robots.txt file to specify how search engines and web crawlers index your site.
To prevent the server from automatically generating a robots.txt file in the document root directory, follow these steps:
1. On the top menu bar, click + File.
2. In the New File dialog box, in the New File Name text box, type robots.txt.ignore.
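If you prefer the command line, the same marker file can be created over SSH. This is a minimal sketch that assumes your document root is `$HOME/public_html`; adjust the `DOCROOT` variable if your account uses a different path.

```shell
# Create the robots.txt.ignore marker file in the document root,
# which tells the server not to auto-generate robots.txt.
DOCROOT="$HOME/public_html"          # assumed document root; adjust if different
mkdir -p "$DOCROOT"                  # ensure the document root exists
touch "$DOCROOT/robots.txt.ignore"   # create the empty marker file
ls "$DOCROOT"                        # confirm the file is present
```

The file's contents do not matter; the server only checks for its presence in the document root.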
For more information about the robots.txt file, please visit http://www.robotstxt.org.
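For reference, if you do decide to manage your own robots.txt file instead, a minimal example looks like this (the paths shown are placeholders, not recommendations for your site):

```text
# robots.txt — placed in the document root (public_html)
User-agent: *          # applies to all crawlers
Disallow: /private/    # do not crawl this directory
Allow: /               # everything else may be crawled
```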