Batch file processing on server

Our company develops, supports, and maintains websites of any complexity, from simple one-page sites to large-scale cluster systems built on microservices. Our developers' expertise is confirmed by vendor certificates.
Development and maintenance of all types of websites:
Informational websites or web applications
Business card websites, landing pages, corporate websites, online catalogs, quizzes, promo websites, blogs, news resources, informational portals, forums, aggregators
E-commerce websites or web applications
Online stores, B2B portals, marketplaces, online exchanges, cashback websites, dropshipping platforms, product parsers
Business process management web applications
CRM systems, ERP systems, corporate portals, production management systems, information parsers
Electronic service websites or web applications
Classified ads platforms, online schools, online cinemas, website builders, portals for electronic services, video hosting platforms, thematic portals

These are just some of the types of websites we work with; each has its own features and functionality and can be customized to the client's specific needs and goals.

Implementing Batch File Processing on the Server

Batch processing is what you need when there are many homogeneous tasks that must be executed efficiently without collapsing under load. Importing 50,000 CSV rows, converting an archive of 3,000 images, regenerating a sitemap nightly: all of these are batch tasks, each with its own runtime, memory, and error-handling requirements.

Key Batch Processing Problems

Memory. Loading an entire CSV into an array is a direct path to OOM. The correct pattern is stream reading in chunks (a minimal sketch follows this list of problems).

Partial errors. If 50 of 10,000 rows are invalid, stopping the entire process is wrong. You need skip-log-continue logic: skip the bad rows, log them, and keep going.

Reproducibility. If the process crashed on row 7,000, you need the ability to resume from the point of failure, not restart from scratch.

Parallelism. Sequential processing of 50,000 records at 100 ms each takes 50,000 × 0.1 s = 5,000 s, roughly 1.4 hours. Splitting the work into parallel jobs cuts this dramatically: with four workers, the same load finishes in about 21 minutes.
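
The memory point is worth showing in isolation before the full implementation. A minimal sketch of the anti-pattern versus the streaming pattern in plain PHP (readCsvRows is an illustrative helper, not part of the service below):

// Anti-pattern: file() reads every line into memory at once; the
// resulting array can occupy several times the file's size in RAM.
$rows = array_map('str_getcsv', file($filePath));

// Pattern: a generator yields one row at a time, so memory usage
// stays flat regardless of file size.
function readCsvRows(string $filePath): \Generator
{
    $handle = fopen($filePath, 'r');

    while (($row = fgetcsv($handle)) !== false) {
        yield $row;
    }

    fclose($handle);
}

foreach (readCsvRows('import.csv') as $row) {
    // process one row, then let it be garbage-collected
}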

Architecture: Chunked + Parallel Jobs

Pattern "Batch → Chunks → Jobs":

File upload
     ↓
BatchImportJob (master task)
     ↓ splits into chunks
[ChunkJob 1] [ChunkJob 2] ... [ChunkJob N]  ← parallel
     ↓
BatchCompletedJob (results aggregation)

Laravel provides Bus::batch() for this pattern.

Implementation Example: CSV Import

namespace App\Services;

// the App\… imports assume the project's default Laravel namespaces
use App\Events\ImportCompletedEvent;
use App\Jobs\ProcessCsvChunkJob;
use App\Models\Import;
use App\Models\ImportChunkResult;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\LazyCollection;

class CsvImportService
{
    private const CHUNK_SIZE = 500;

    public function startImport(string $filePath, int $importId): string
    {
        $jobs = [];

        // LazyCollection: stream reading without loading the whole file into memory
        LazyCollection::make(function () use ($filePath) {
            $handle = fopen($filePath, 'r');
            $header = fgetcsv($handle); // the first row contains the column headers

            while (($row = fgetcsv($handle)) !== false) {
                yield array_combine($header, $row);
            }

            fclose($handle);
        })
        ->chunk(self::CHUNK_SIZE)
        ->each(function ($chunk, $index) use (&$jobs, $importId) {
            $jobs[] = new ProcessCsvChunkJob(
                importId: $importId,
                chunkIndex: $index,
                rows: $chunk->values()->toArray()
            );
        });

        $batch = Bus::batch($jobs)
            ->name("csv-import-{$importId}")
            ->allowFailures() // continue on individual Job errors
            ->then(function (Batch $batch) use ($importId) {
                Import::find($importId)?->update(['status' => 'completed']);
                ImportCompletedEvent::dispatch($importId);
            })
            ->catch(function (Batch $batch, \Throwable $e) use ($importId) {
                Import::find($importId)?->update([
                    'status'        => 'partially_failed',
                    'error_message' => $e->getMessage(),
                ]);
            })
            ->finally(function (Batch $batch) use ($importId) {
                $import = Import::find($importId);
                $import?->update([
                    'total_jobs'    => $batch->totalJobs,
                    'failed_jobs'   => $batch->failedJobs,
                    'finished_at'   => now(),
                ]);
            })
            ->onQueue('batch-processing')
            ->dispatch();

        Import::find($importId)?->update(['batch_id' => $batch->id]);

        return $batch->id;
    }
}
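
The usual entry point is an upload action that stores the file, creates the Import record, and hands off to the service. A minimal sketch; the route, request field names, and pre-created Import fields are illustrative:

// app/Http/Controllers/ImportController.php (illustrative upload action)
public function store(Request $request, CsvImportService $service): JsonResponse
{
    $request->validate(['file' => 'required|file|mimes:csv,txt']);

    // keep the upload outside the public web root
    $path = $request->file('file')->store('imports');

    $import = Import::create([
        'status'     => 'processing',
        'total_rows' => 0, // optionally pre-count lines for an accurate percentage
    ]);

    $batchId = $service->startImport(Storage::path($path), $import->id);

    return response()->json([
        'import_id' => $import->id,
        'batch_id'  => $batchId,
    ]);
}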

Chunk Processing Job

// app/Jobs/ProcessCsvChunkJob.php
namespace App\Jobs;

use App\Models\Import;
use App\Models\ImportChunkResult;
use App\Models\User;
use Illuminate\Bus\Batchable;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessCsvChunkJob implements ShouldQueue
{
    use Batchable, Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries   = 3;
    public int $timeout = 120;
    public int $backoff = 10;

    public function __construct(
        private int   $importId,
        private int   $chunkIndex,
        private array $rows
    ) {}

    public function handle(): void
    {
        // If the entire batch was cancelled, stop
        if ($this->batch()?->cancelled()) {
            return;
        }

        $successCount = 0;
        $errors       = [];

        // $lineNum is the row's position within this chunk, not the file line;
        // the absolute line ≈ chunk index × chunk size + $lineNum + 2 (header row, 1-based)
        foreach ($this->rows as $lineNum => $row) {
            try {
                $this->processRow($row);
                $successCount++;
            } catch (\Throwable $e) {
                $errors[] = [
                    'chunk'  => $this->chunkIndex,
                    'line'   => $lineNum,
                    'data'   => array_slice($row, 0, 3), // first 3 fields for diagnostics
                    'error'  => $e->getMessage(),
                ];
            }
        }

        // Save chunk statistics
        ImportChunkResult::create([
            'import_id'     => $this->importId,
            'chunk_index'   => $this->chunkIndex,
            'processed'     => count($this->rows),
            'succeeded'     => $successCount,
            'failed'        => count($errors),
            'errors'        => $errors,
        ]);

        // Atomically update import counters
        Import::where('id', $this->importId)->increment('processed_rows', count($this->rows));
        Import::where('id', $this->importId)->increment('success_rows', $successCount);
    }

    private function processRow(array $row): void
    {
        // Validation and row saving here
        // Example:
        $validated = validator($row, [
            'email' => 'required|email',
            'name'  => 'required|string|max:255',
        ])->validate();

        User::updateOrCreate(
            ['email' => $validated['email']],
            ['name'  => $validated['name']]
        );
    }
}
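
The job and service above rely on two tables of the project's own. A hedged sketch of the columns they touch; everything beyond the fields actually used in the code is an assumption, and the ImportChunkResult model is assumed to cast errors to array:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('imports', function (Blueprint $table) {
    $table->id();
    $table->string('status')->default('processing');
    $table->string('batch_id')->nullable();
    $table->unsignedInteger('total_rows')->default(0);
    $table->unsignedInteger('processed_rows')->default(0);
    $table->unsignedInteger('success_rows')->default(0);
    $table->unsignedInteger('total_jobs')->default(0);
    $table->unsignedInteger('failed_jobs')->default(0);
    $table->text('error_message')->nullable();
    $table->timestamp('finished_at')->nullable();
    $table->timestamps();
});

Schema::create('import_chunk_results', function (Blueprint $table) {
    $table->id();
    $table->foreignId('import_id')->constrained()->cascadeOnDelete();
    $table->unsignedInteger('chunk_index');
    $table->unsignedInteger('processed');
    $table->unsignedInteger('succeeded');
    $table->unsignedInteger('failed');
    $table->json('errors')->nullable();
    $table->timestamps();
    $table->unique(['import_id', 'chunk_index']); // one result row per chunk
});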

Resuming Interrupted Batch

If the server crashes mid-processing, Laravel stores the batch state in the job_batches table: completed chunks do not re-run, and jobs that were still queued are picked up automatically when the workers restart.

To force a restart of an incomplete batch:

$batch = Bus::findBatch($batchId);

if ($batch && !$batch->finished()) {
    // A chunk that saved its ImportChunkResult record is already done
    $completedChunks = ImportChunkResult::where('import_id', $importId)
        ->pluck('chunk_index');

    // Re-read the file, skip the completed chunks, and redispatch
    // the rest as a new batch (see the sketch below)
}
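
A minimal sketch of that redispatch step, written as a method on the same CsvImportService; it assumes the uploaded file is still on disk and reuses the chunking from startImport (resumeImport is an illustrative name, not an existing API):

public function resumeImport(string $filePath, int $importId): string
{
    // chunk indexes that already have a result record are done
    $done = ImportChunkResult::where('import_id', $importId)
        ->pluck('chunk_index')
        ->flip();

    $jobs = [];

    LazyCollection::make(function () use ($filePath) {
        $handle = fopen($filePath, 'r');
        $header = fgetcsv($handle);

        while (($row = fgetcsv($handle)) !== false) {
            yield array_combine($header, $row);
        }

        fclose($handle);
    })
    ->chunk(self::CHUNK_SIZE)
    ->each(function ($chunk, $index) use (&$jobs, $importId, $done) {
        if (! $done->has($index)) { // only chunks without a saved result
            $jobs[] = new ProcessCsvChunkJob(
                importId: $importId,
                chunkIndex: $index,
                rows: $chunk->values()->toArray()
            );
        }
    });

    $batch = Bus::batch($jobs)
        ->name("csv-import-resume-{$importId}")
        ->allowFailures()
        ->onQueue('batch-processing')
        ->dispatch();

    Import::find($importId)?->update(['batch_id' => $batch->id]);

    return $batch->id;
}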

Real-Time Progress

// app/Http/Controllers/ImportController.php
public function progress(int $importId): JsonResponse
{
    $import = Import::findOrFail($importId);
    $batch  = $import->batch_id ? Bus::findBatch($import->batch_id) : null;

    return response()->json([
        'status'          => $import->status,
        'processed_rows'  => $import->processed_rows,
        'success_rows'    => $import->success_rows,
        'total_rows'      => $import->total_rows,
        'percentage'      => $import->total_rows > 0
            ? round($import->processed_rows / $import->total_rows * 100, 1)
            : 0,
        'batch' => $batch ? [
            'total_jobs'      => $batch->totalJobs,
            'pending_jobs'    => $batch->pendingJobs,
            'failed_jobs'     => $batch->failedJobs,
            'progress'        => $batch->progress(),
        ] : null,
    ]);
}
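
The endpoint is then registered as an ordinary route and polled by the frontend every few seconds (the URL is illustrative):

// routes/api.php
Route::get('/imports/{importId}/progress', [ImportController::class, 'progress']);

For push-style updates, the ImportCompletedEvent dispatched in the then() callback can be broadcast over WebSockets instead of polling.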

Load Limiting

The batch queue needs its own worker pool with limited parallelism, so that batch jobs cannot overwhelm the database or CPU. A Supervisor configuration:

[program:batch-worker]
command=php artisan queue:work --queue=batch-processing --max-jobs=50 --sleep=3 --timeout=120
numprocs=4
autostart=true
autorestart=true

numprocs=4 starts four workers, each processing chunks sequentially, so at most four chunks run in parallel. --max-jobs=50 restarts a worker after it has handled 50 jobs, releasing any memory PHP has accumulated.
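
If the bottleneck is a shared resource such as the database rather than the workers themselves, Laravel's Redis-based throttle can additionally cap how many chunk jobs run per time window. A sketch of a throttled handle() wrapper, assuming a Redis connection is configured (processChunk() is an illustrative name for the original handle() body):

use Illuminate\Support\Facades\Redis;

public function handle(): void
{
    // allow at most 20 chunk jobs to start per 60 seconds across all workers
    Redis::throttle('csv-import-chunks')
        ->allow(20)
        ->every(60)
        ->then(function () {
            $this->processChunk(); // the original processing logic
        }, function () {
            // no slot available: return the job to the queue for 10 seconds
            $this->release(10);
        });
}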

Processing Multiple File Formats

The same pattern works for images, JSON, and XLSX. PhpSpreadsheet has no true streaming mode for XLSX, but a chunked read filter keeps memory bounded by loading only a range of rows per pass:

use PhpOffice\PhpSpreadsheet\Reader\IReadFilter;
use PhpOffice\PhpSpreadsheet\Reader\Xlsx;

// Read filter that admits only rows within [startRow, endRow]
class ChunkReadFilter implements IReadFilter
{
    public function __construct(private int $startRow, private int $endRow) {}

    public function readCell($columnAddress, $row, $worksheetName = ''): bool
    {
        return $row >= $this->startRow && $row <= $this->endRow;
    }
}

$reader = new Xlsx();
$reader->setReadDataOnly(true);

// listWorksheetInfo() reads only metadata, never cell data
$highestRow = $reader->listWorksheetInfo($filePath)[0]['totalRows'];

// Load and dispatch in chunks of 500 rows (row 1 is the header)
for ($startRow = 2; $startRow <= $highestRow; $startRow += 500) {
    $endRow = min($startRow + 499, $highestRow);
    $reader->setReadFilter(new ChunkReadFilter($startRow, $endRow));
    $spreadsheet = $reader->load($filePath); // only the filtered rows are hydrated
    $rows = $spreadsheet->getActiveSheet()->rangeToArray("A{$startRow}:Z{$endRow}");
    // dispatch Job for chunk
    $spreadsheet->disconnectWorksheets(); // free memory before the next chunk
}
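
For large JSON files the same idea requires a streaming parser, since json_decode() loads the entire document into memory. One option is the halaxa/json-machine package; a minimal sketch assuming it is installed:

use JsonMachine\Items;

$chunk = [];

// Items::fromFile() walks the top-level JSON array lazily, item by item
foreach (Items::fromFile($filePath) as $item) {
    $chunk[] = (array) $item; // items arrive as stdClass by default

    if (count($chunk) === 500) {
        // dispatch Job for chunk
        $chunk = [];
    }
}

if ($chunk !== []) {
    // dispatch Job for the final partial chunk
}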

Timeline

A basic CSV import with chunking and progress tracking takes about one working day. Adding resumption, a detailed error log, and a progress endpoint takes another 6–8 hours; supporting XLSX and JSON formats adds a further 4–6 hours.