Robots.txt Setup

Our company develops, supports, and maintains websites of any complexity, from simple one-page sites to large-scale clustered systems built on microservices. Our developers' expertise is confirmed by vendor certificates.
We develop and maintain all types of websites:
  • Informational websites and web applications: business card websites, landing pages, corporate websites, online catalogs, quizzes, promo websites, blogs, news resources, informational portals, forums, aggregators
  • E-commerce websites and web applications: online stores, B2B portals, marketplaces, online exchanges, cashback websites, dropshipping platforms, product parsers
  • Business process management web applications: CRM systems, ERP systems, corporate portals, production management systems, information parsers
  • Electronic service websites and web applications: classified ads platforms, online schools, online cinemas, website builders, electronic service portals, video hosting platforms, thematic portals

These are just some of the types of websites we work with; each can have its own features and functionality and be tailored to the specific needs and goals of the client.

Setting up robots.txt for your website

The robots.txt file controls search engine bots' access to website pages. Proper configuration keeps technical pages, duplicates, and private sections out of the index.

Basic structure

User-agent: *
Disallow: /admin/
Disallow: /area51/
Disallow: /api/
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
Disallow: /search?
Disallow: /*?sort=
Disallow: /*?page=
Allow: /

Sitemap: https://example.ru/sitemap.xml
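
Note that the final Allow: / does not cancel the Disallow rules above it: Google and Yandex apply the most specific (longest) matching rule, so for a URL such as /admin/users the Disallow: /admin/ rule still takes precedence.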

What to block

Required:

  • Administration panels (/admin/, /wp-admin/)
  • API endpoints (/api/)
  • Cart, checkout, account pages
  • Search results pages
  • Technical pages (login, register, password-reset)

Recommended:

  • URLs with filtering and sorting parameters (duplicate content)
  • Pagination pages (or leave them crawlable if canonical tags are not set)
  • /print/, /pdf/ versions of pages

Do not block:

  • CSS and JS files — Google must see them for rendering
  • Images (if you want Google Images indexing)
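
As an illustration, a fragment that combines these recommendations might look like the following; the /wp-admin/ and /print/ paths and the filter/sort parameter names are placeholders for your own URL scheme:

User-agent: *
# keep the admin area closed, but leave the AJAX endpoint used by the front end open
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
# filtering and sorting parameters create duplicate content
Disallow: /*?filter=
Disallow: /*?sort=
# print versions duplicate the main pages
Disallow: /print/
# explicitly keep rendering resources crawlable
Allow: /*.css$
Allow: /*.js$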

Directives for Yandex

Yandex supports extended syntax:

User-agent: Yandex
Disallow: /search?
Disallow: /*?utm_
Clean-param: utm_source&utm_medium&utm_campaign&utm_content&utm_term

Clean-param tells Yandex which GET parameters do not create unique content, so URLs that differ only in those parameters are not treated as duplicates.
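
Clean-param also accepts an optional path prefix as a second field, so it can be limited to one section of the site; the sort/order parameters and the /catalog/ path below are placeholders:

User-agent: Yandex
Clean-param: sort&order /catalog/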

Dynamic robots.txt in Laravel

Register a route that serves the file from a Blade view as plain text:

Route::get('/robots.txt', function () {
    $content = view('robots')->render();
    return response($content, 200, ['Content-Type' => 'text/plain']);
});

The resources/views/robots.blade.php view then switches the rules by environment:

User-agent: *
@if (app()->environment('production'))
Disallow: /admin/
Disallow: /api/
Allow: /
Sitemap: {{ url('/sitemap.xml') }}
@else
Disallow: /
@endif

On staging and development environments, block everything so that search engines do not index the test site.
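
Keep in mind that robots.txt only restricts crawling; a staging URL that is already known to search engines can still appear in the index. A common extra safeguard is to send a noindex header outside production. A minimal sketch, assuming a standard Laravel setup (the middleware name is arbitrary):

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class NoIndexOutsideProduction
{
    public function handle(Request $request, Closure $next)
    {
        $response = $next($request);

        // Outside production, tell search engines not to index or follow anything
        if (! app()->environment('production')) {
            $response->headers->set('X-Robots-Tag', 'noindex, nofollow');
        }

        return $response;
    }
}

Register it globally (in app/Http/Kernel.php, or in bootstrap/app.php on Laravel 11+) so the header is added to every response.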

Verification

  • Google Search Console → robots.txt Tester Tool
  • curl https://example.ru/robots.txt — verify the file is served correctly
  • Ensure the file is strictly in the domain root (not /en/robots.txt)
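
If robots.txt is generated dynamically, it is worth covering with a test. A minimal Laravel feature test sketch, assuming the route from the section above (the class and method names are arbitrary):

<?php

namespace Tests\Feature;

use Tests\TestCase;

class RobotsTxtTest extends TestCase
{
    public function test_robots_txt_is_served_as_plain_text(): void
    {
        $response = $this->get('/robots.txt');

        // The file must be available at the domain root and served as plain text
        $response->assertOk();
        $this->assertStringStartsWith('text/plain', $response->headers->get('Content-Type'));
        $response->assertSee('User-agent:');
    }
}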

Setup time: a few hours.