I was running all my containers on AMD64 shapes because that's what I'd always done. x86, Intel/AMD, the default. Then I looked at my OCI bill and realized I was paying $0.064/OCPU/hr for AMD64 when ARM shapes cost $0.010/OCPU/hr. More than six times cheaper for the same work.
The catch? My Docker images were all built for AMD64. They wouldn't run on ARM nodes. I had to figure out multi-arch builds.
It took me an afternoon to get right, and now every image I build supports both architectures. Here's what I learned.
Why ARM on OCI Is Different From ARM Everywhere Else
AWS has Graviton. GCP has Tau T2A. Azure has Ampere Altra. They're all ARM, and they're all cheaper than their x86 equivalents.
But OCI's pricing gap is the widest I've seen:
| Architecture | Shape | $/OCPU/hr | Monthly (4 OCPU + 24 GB) |
|---|---|---|---|
| ARM | VM.Standard.A1.Flex | $0.010 | ~$29 |
| AMD64 | VM.Standard.E4.Flex | $0.064 | ~$184 |
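(Those monthly figures follow from the hourly rates at roughly 720 hours per month: 0.010 × 4 OCPUs × 720 hours ≈ $29, and 0.064 × 4 × 720 ≈ $184.)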
And the Always Free tier gives you 4 ARM OCPUs and 24GB RAM forever. There's nothing comparable on the x86 side.
The problem is that plenty of images on Docker Hub are still x86-only, and if you've been building your own images without thinking about architecture, yours probably are too.
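If you want to check whether an image you depend on already ships an ARM variant, you can inspect its manifest before committing to anything (nginx here is just an example image):
docker manifest inspect nginx:latest | grep architecture
# a multi-arch image shows one "architecture" entry per variant (amd64, arm64, ...);
# if there's no arm64 entry, that image won't run on an A1 node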
How I Actually Did the Multi-Arch Build
Docker Buildx makes this surprisingly painless. The first time I tried it I expected hours of yak-shaving. It took about 20 minutes.
# Create a builder that supports multiple platforms
docker buildx create --name multiarch --driver docker-container --use
# Build and push for both architectures
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t iad.ocir.io/mytenancy/myapp:v1.2.0 \
--push .
That's the core of it. Buildx uses QEMU emulation to build the ARM image on your x86 machine (or vice versa). With --push, it pushes both architecture-specific images plus a manifest list that points to each of them. When an ARM node pulls the tag, it gets the ARM variant automatically.
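If you want to confirm the push actually produced a manifest list, you can inspect it straight from the registry (same tag as the build command above):
docker buildx imagetools inspect iad.ocir.io/mytenancy/myapp:v1.2.0
# should list one manifest per platform: linux/amd64 and linux/arm64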
The Gotchas I Hit
QEMU is slow. Compiling a Go binary under QEMU emulation took about 4x longer than a native build. For Go, I got around this by using Go's built-in cross-compilation:
FROM --platform=$BUILDPLATFORM golang:1.22-alpine AS builder
ARG TARGETARCH
WORKDIR /app
COPY . .
RUN GOARCH=$TARGETARCH CGO_ENABLED=0 go build -o server .
FROM alpine:3.20
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
The --platform=$BUILDPLATFORM runs the build stage on your host architecture (fast), and GOARCH=$TARGETARCH tells Go to cross-compile for the target. No QEMU needed for the slow compilation step. Build went from 3 minutes back down to 40 seconds.
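If you want to sanity-check the cross-compile outside of Docker, the same mechanism works from a plain shell; this is just a quick local check, not part of the pipeline:
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o server-arm64 .
file server-arm64   # should report an ARM aarch64 ELF binary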
Python images need attention. Some pip packages have pre-built wheels for x86 but not ARM. When that happens, pip tries to compile from source inside the container, which needs gcc and build headers you might not have in your image. I hit this with an older numpy version; pinning to a release that ships ARM wheels fixed it.
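If you can't avoid compiling a wheel from source on ARM, the fix is to give the image a toolchain. A minimal sketch of a Debian-based stage (the base image and package choice are illustrative):
FROM python:3.12-slim
# build-essential provides gcc and the headers pip needs to compile wheels from source
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt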
Alpine vs Debian base images. Alpine uses musl libc, not glibc. Some binaries compiled for ARM + glibc won't work on Alpine. If you're getting weird segfaults on ARM, try switching to debian:bookworm-slim as your base and see if it goes away.
CI Pipeline for Multi-Arch
I have this in GitHub Actions. It builds for both architectures and pushes to OCIR:
name: Build Multi-Arch
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to OCIR
        uses: docker/login-action@v3
        with:
          registry: iad.ocir.io
          username: ${{ secrets.OCIR_USERNAME }}
          password: ${{ secrets.OCIR_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          platforms: linux/amd64,linux/arm64
          tags: iad.ocir.io/${{ secrets.OCIR_TENANCY }}/myapp:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
The cache-from and cache-to lines with type=gha store the Docker layer cache in GitHub Actions' cache, which makes subsequent builds much faster.
Deploying on OKE with Mixed Node Pools
On OKE I run two node pools — one ARM, one x86. Most workloads go to ARM because it's cheaper. GPU workloads stay on x86, since the OCI GPU shapes I have access to are all x86-based.
# ARM node pool (cheap, general workloads)
oci ce node-pool create \
--name arm-workers \
--node-shape VM.Standard.A1.Flex \
--node-shape-config '{"ocpus": 4, "memoryInGBs": 24}' \
...
# AMD64 node pool (GPU workloads, x86-only dependencies)
oci ce node-pool create \
--name x86-workers \
--node-shape VM.Standard.E4.Flex \
--node-shape-config '{"ocpus": 4, "memoryInGBs": 32}' \
...
Scheduling needs no extra work. Because the images are multi-arch, a pod can land on either pool, and when a node pulls the image, the container runtime picks the right architecture variant from the manifest list. I didn't have to add any node selectors or affinity rules for most workloads — the manifest list takes care of it.
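You can confirm which pool is which straight from the node labels, since every node carries the standard kubernetes.io/arch label:
kubectl get nodes -L kubernetes.io/arch
# adds a column with each node's architecture label: arm64 for the A1 pool, amd64 for the E4 pool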
For workloads that must run on a specific architecture (like anything that needs NVIDIA GPUs), I use a node selector:
nodeSelector:
  kubernetes.io/arch: amd64
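For context, here's where that selector sits in a full manifest. A minimal Deployment sketch (the name and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-worker                  # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-worker
  template:
    metadata:
      labels:
        app: gpu-worker
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # pin to the x86 node pool
      containers:
        - name: gpu-worker
          image: iad.ocir.io/mytenancy/gpu-worker:v1   # placeholder image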
The Actual Savings
I moved 6 microservices from x86 to ARM over two weeks. These are Go and Python services — nothing exotic. All of them worked on ARM without code changes. The Docker images needed rebuilding with Buildx, but the Dockerfiles didn't change.
Monthly compute cost went from ~$184 to ~$29 for the same 4 OCPU / 24GB configuration per service. Across 6 services, that's about $930/month saved. Not life-changing for a company, but for my side projects and dev environments? That's real money.
When ARM Doesn't Work
Not everything runs on ARM. In my experience:
- NVIDIA GPU workloads — x86 only for now
- Legacy binaries — anything compiled for x86 without source code
- Some Java native libraries — JNI libraries that ship x86 .so files only
- Electron / desktop tools — not relevant for server containers but worth mentioning
Everything else — Go, Python, Node, Rust, Java (pure), Ruby — works fine on ARM. The ecosystem has matured a lot in the last two years.
Try It
If you're on OCI and not using ARM shapes, you're leaving money on the table. Start with one service. Build it multi-arch with Buildx. Deploy it on an A1.Flex node. Compare the bill.
The Docker workflow barely changes. docker build, docker push, kubectl apply. The main difference is swapping in buildx and adding --platform linux/amd64,linux/arm64 to your build command.
Pavan Madduri — Oracle ACE Associate, CNCF Golden Kubestronaut. GitHub | LinkedIn




