A script that works on a laptop can fall apart fast in production: browser startup failures, oversized container images, overlapping runs, flaky retries, and JavaScript-heavy pages that behave differently under automation.
That is why I like this setup:
- Playwright for automation
- Bright Data Browser API for remote browser execution
- Kubernetes Jobs/CronJobs for repeatable batch runs
The key shift is simple:
stop treating scraping like a script, and start treating it like a worker.
Remote browsers + Kubernetes make the pipeline cleaner, smaller, and much easier to operate at scale.
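
The "worker" framing maps directly onto a Kubernetes CronJob: the scraper runs on a schedule, overlapping runs are forbidden, and failed runs are retried a bounded number of times. A sketch, where the image name, schedule, and secret name are placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scraper
spec:
  schedule: "0 * * * *"           # hourly; adjust to your cadence
  concurrencyPolicy: Forbid       # no overlapping runs
  jobTemplate:
    spec:
      backoffLimit: 2             # retry a failed run at most twice
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: scraper
              image: registry.example.com/scraper:latest   # placeholder image
              envFrom:
                - secretRef:
                    name: brightdata-credentials           # placeholder secret
```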