Cursor Rules for Docker: 6 Rules That Make AI Write Production-Ready Dockerfiles
If you use Cursor or Claude Code and ask for a Dockerfile, you've seen the results. :latest tags everywhere. Root users running your app. Secrets baked into layers. Multi-gigabyte images because the AI installed build tools in the runtime stage. apt-get without cleanup. No .dockerignore.
The fix isn't better prompting. It's better rules.
Here are 6 Cursor rules for Docker that make your AI assistant write Dockerfiles that are secure, small, and production-ready. Each one includes a before/after example so you can see exactly what changes.
1. Enforce Multi-Stage Builds: Ban Single-Stage Production Images
Without this rule, AI installs compilers, dev dependencies, and build tools in the same image your app runs in. Your 80MB Go binary ships inside a 1.2GB image with gcc, make, and the entire Go toolchain.
The rule:
Always use multi-stage Docker builds. Build stage installs dependencies
and compiles. Final stage copies only the built artifact into a minimal
base image (alpine, distroless, or scratch). Never install build tools
in the runtime stage.
Bad - everything in one stage:
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
Good - multi-stage with minimal runtime:
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --ignore-scripts
COPY . .
RUN npm run build
RUN npm prune --omit=dev
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package*.json ./
EXPOSE 3000
USER node
CMD ["node", "dist/index.js"]
The build stage has everything npm needs. The runtime stage has only the compiled output and production dependencies. Image size drops from 1GB+ to under 200MB.
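The payoff is even bigger for compiled languages, where the final stage can be scratch. A sketch for a Go service (the repo layout and build flags are illustrative assumptions, not a one-size-fits-all recipe):

```dockerfile
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 produces a static binary that can run in scratch
RUN CGO_ENABLED=0 go build -o /app .

FROM scratch
COPY --from=build /app /app
# Numeric UID, since scratch has no /etc/passwd
USER 65534
ENTRYPOINT ["/app"]
```

The runtime image contains exactly one file: the binary. No shell, no package manager, nothing for an attacker to work with.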
2. Enforce Non-Root User: Ban Running as Root
AI-generated Dockerfiles rarely include a USER directive. Your container runs as root, which means a container escape gives the attacker root on the host.
The rule:
Never run containers as root. Create a non-root user in the Dockerfile
or use the built-in non-root user for the base image (e.g., node user
for node images). Set USER before CMD. Set file ownership explicitly.
Bad - running as root (the default):
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
Good - dedicated non-root user:
FROM python:3.12-slim
RUN groupadd --gid 1001 appgroup && \
    useradd --uid 1001 --gid appgroup --shell /bin/false appuser
WORKDIR /app
COPY --chown=appuser:appgroup . .
RUN pip install --no-cache-dir -r requirements.txt
USER appuser
CMD ["python", "main.py"]
The app runs as appuser with no shell access. Even if an attacker exploits the application, they can't escalate to root inside the container.
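One caveat: groupadd and useradd come from shadow-utils and aren't present on Alpine bases, which ship the BusyBox equivalents addgroup and adduser instead. A sketch of the same pattern on Alpine (the node images also ship a ready-made node user, which is often simpler):

```dockerfile
FROM node:20-alpine
# BusyBox variants: -D skips the password, -s sets the (disabled) shell
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/false -D appuser
WORKDIR /app
COPY --chown=appuser:appgroup . .
USER appuser
CMD ["node", "index.js"]
```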
3. Pin Image Tags: Ban :latest
Without this rule, AI writes FROM python:latest or FROM node:20. Your builds break silently when the upstream image changes. Tuesday's build works, Wednesday's doesn't, and nothing in your code changed.
The rule:
Always pin base image tags to a specific version including the variant
(e.g., node:20.11-alpine, python:3.12.2-slim). Never use :latest.
For maximum reproducibility, pin to the SHA256 digest in production.
Bad - unpinned tags:
FROM node:latest
FROM python:3
FROM ubuntu
Good - pinned versions:
FROM node:20.11-alpine3.19
FROM python:3.12.2-slim-bookworm
FROM ubuntu:24.04
Pinned tags mean the same Dockerfile builds the same image today and next month. Version bumps are explicit and reviewable in your git history.
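For digest pinning, the image reference can carry both a readable tag and the immutable digest; docker pull prints the digest for any image you already use. The digest below is a placeholder, not a real value:

```dockerfile
# Tag for humans, digest for the builder (placeholder digest shown)
FROM node:20.11-alpine3.19@sha256:<digest-printed-by-docker-pull>
```

With a digest, even a re-tagged upstream image can't change what you build from.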
4. Optimize Layer Caching: Ban COPY . Before Install
AI puts COPY . . before RUN npm install. Every code change invalidates the dependency cache. A one-line fix triggers a full dependency reinstall that takes 3 minutes.
The rule:
Copy dependency manifests (package.json, requirements.txt, go.mod) first
and install dependencies in a separate layer. Copy source code after.
Use .dockerignore to exclude node_modules, .git, and build artifacts.
Combine related RUN commands with && to reduce layers.
Bad - cache-busting layer order:
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
Good - dependency layer cached separately:
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
Now changing main.py doesn't reinstall dependencies. The pip install layer is cached until requirements.txt changes. Build time drops from minutes to seconds.
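The .dockerignore mentioned in the rule matters just as much: without it, COPY . . drags node_modules or .git into the build context and busts the cache anyway. A starting-point file (adjust the entries to your project):

```
node_modules
.git
dist
__pycache__
*.log
.env
Dockerfile
.dockerignore
```

Excluding .env here also backstops rule 5: local secret files never enter the build context at all.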
5. Never Bake Secrets Into Images: Ban ENV for Credentials
AI puts API keys and database passwords in ENV directives or ARG variables that end up in the image layers. Anyone who pulls your image can inspect the layers and extract every secret.
The rule:
Never put secrets in ENV, ARG, or COPY directives in Dockerfiles.
Use runtime environment variables, Docker secrets, or mounted config files.
Use --mount=type=secret for build-time secrets that must not persist in layers.
Never hardcode credentials, tokens, or connection strings.
Bad - secrets baked into the image:
FROM node:20-alpine
ENV DATABASE_URL=postgresql://admin:s3cret@db:5432/prod
ENV STRIPE_KEY=sk_live_abc123
COPY . .
RUN npm install
CMD ["node", "index.js"]
Good - secrets injected at runtime:
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN --mount=type=secret,id=npmrc,target=/app/.npmrc npm ci
COPY . .
RUN npm run build
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
USER node
CMD ["node", "dist/index.js"]
# Secrets passed at runtime: docker run -e DATABASE_URL=... -e STRIPE_KEY=...
No secrets in any layer. Build-time secrets (like private registry tokens) use --mount=type=secret which never persists. Runtime secrets come from the orchestrator.
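The --mount=type=secret line above pairs with a matching flag on the build command. A sketch, assuming the npm token lives in a local .npmrc file:

```
# BuildKit exposes the file to that single RUN step only;
# it never appears in any image layer or in the build cache
docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .
```

The id in the build flag must match the id in the Dockerfile's --mount directive.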
6. Add Health Checks: Ban Images Without HEALTHCHECK
Without this rule, Docker has no idea if your app is actually healthy. The container shows "running" even when the process is deadlocked, out of memory, or returning 500s on every request.
The rule:
Every production Dockerfile must include a HEALTHCHECK instruction.
Use curl, wget, or a custom binary to check the app's health endpoint.
Set appropriate interval, timeout, and retries. Install only the minimal
tool needed for the check (Alpine images already ship a BusyBox wget applet, so curl usually isn't needed).
Bad - no health check:
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci
EXPOSE 3000
CMD ["node", "index.js"]
Good - health check with proper timing:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --ignore-scripts
COPY . .
EXPOSE 3000
USER node
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD wget -q --spider http://localhost:3000/health || exit 1
CMD ["node", "index.js"]
Docker now knows when your app is unhealthy. Orchestrators like Kubernetes and Docker Swarm use this to restart failed containers automatically. Load balancers stop routing traffic to unhealthy instances.
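You can watch the check work from the CLI; the container name below is an assumption for the example:

```
# STATUS column shows (healthy) or (unhealthy) once checks have run
docker ps

# Or query the health state directly
docker inspect --format '{{.State.Health.Status}}' myapp
```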
Put These Rules to Work
These 6 rules cover the patterns where AI coding assistants fail most often with Docker. Add them to your .cursorrules or CLAUDE.md and the difference is immediate: smaller images, no root containers, no leaked secrets, and builds that actually cache.
I've packaged these rules (plus 44 more covering Kubernetes, CI/CD, and infrastructure patterns) into a ready-to-use rules pack: Cursor Rules Pack v2
Drop it into your project directory and stop fighting your AI assistant.