Optimizing AWS CodeBuild for Faster Docker Builds
When your AWS CodeBuild jobs take too long, the first instinct is often to tweak the Dockerfile or upgrade instance types. But one of the most overlooked culprits is provisioning time. If your builds are spending an excessive amount of time just spinning up, the problem might not be your Docker build at all—it could be the runner you’ve chosen.
Provisioning Delays: Choosing the Right Runner
AWS maintains a pool of pre-warmed runners to reduce cold-start delays. However, if you’re using an older or less common runner type, AWS might not have one readily available, leading to longer provisioning times. This means your build sits idle, waiting for an environment to be provisioned before the actual work begins.
If you’re experiencing unusually high provisioning times:
- Check which instance type you’re using for CodeBuild and switch to a more commonly used runner.
- Avoid legacy instance types, which AWS keeps in shorter supply and therefore provisions more slowly.
- Experiment with faster warm-up options such as ARM-based runners if your workload supports them (see the sketch after this list).
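As a concrete example, switching a project to a commonly pooled runner is a one-line CLI change. Here is a rough sketch using `aws codebuild update-project`; the project name is hypothetical, and you should verify the current list of curated ARM images for your region:

```bash
# Hypothetical project name. ARM_CONTAINER with a small compute type is
# widely pooled and tends to provision quickly. privilegedMode=true is
# required for Docker builds. Note that update-project replaces the whole
# environment block, so include every setting you rely on.
aws codebuild update-project \
  --name my-app-build \
  --environment type=ARM_CONTAINER,image=aws/codebuild/amazonlinux2-aarch64-standard:3.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true
```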
VPC: A Bottleneck You Might Not Need
Another silent killer of CodeBuild performance is running inside a VPC. If your build doesn’t strictly require VPC access (for instance, to reach internal resources like RDS or private S3 buckets), then moving it outside the VPC can remove unnecessary latency caused by network initialization.
When running CodeBuild inside a VPC, AWS must:
- Attach an Elastic Network Interface (ENI) to your runner.
- Wait for network provisioning to complete before executing the build.
- Route traffic through NAT gateways if accessing the internet.
If your Docker builds don’t need VPC connectivity, disabling this option can significantly improve start-up time.
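To see whether a project is VPC-attached in the first place, inspect its configuration; a non-empty vpcConfig means every build pays the ENI setup cost before any work starts. A quick check (the project name is a placeholder):

```bash
# Prints the project's VPC settings; null or empty output means the
# project already runs outside a VPC.
aws codebuild batch-get-projects \
  --names my-app-build \
  --query 'projects[0].vpcConfig'
```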
Docker Multi-Stage Builds: The Real Game Changer
Once you’ve optimized provisioning, the next major gain comes from how you structure your Dockerfile. A multi-stage build allows you to separate dependencies from the final build, avoiding redundant installations every time.
Instead of installing all dependencies fresh in each build, create a base image with everything pre-installed:
```dockerfile
# Base image with dependencies
FROM php:8.1-cli AS base
RUN apt-get update && apt-get install -y \
    libzip-dev \
    unzip \
    && docker-php-ext-install zip

# Final build stage
FROM base AS final
COPY . /app
WORKDIR /app
CMD ["php", "index.php"]
```
This approach ensures that your CodeBuild jobs only rebuild layers that change, instead of wasting time reinstalling system dependencies on every run.
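If you want the dependency layer to survive even when builds land on fresh runners, you can push the base stage to your registry and rebuild it only when dependencies change. A minimal sketch using Docker's --target flag, with the ECR repository URI left as a placeholder:

```bash
# Build only the "base" stage of the multi-stage Dockerfile and push it.
# Application builds can then reuse it as a cache source instead of
# re-running apt-get on every run.
docker build --target base -t <your-ecr-repo>:base .
docker push <your-ecr-repo>:base
```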
Leveraging Docker Cache for Faster Builds
A well-structured multi-stage build is powerful, but you also need caching to make the most of it. Passing `--cache-from` lets Docker reuse previously built layers instead of starting from scratch.
Modify your CodeBuild buildspec.yml to include Docker layer caching:
```yaml
phases:
  pre_build:
    commands:
      - echo "Logging in to Amazon ECR..."
      - aws ecr get-login-password | docker login --username AWS --password-stdin <your-ecr-repo>
      # Pull the previous image so its layers are available as a cache source
      - docker pull <your-ecr-repo>:latest || true
  build:
    commands:
      # BUILDKIT_INLINE_CACHE=1 embeds cache metadata so future builds can
      # use this image with --cache-from when BuildKit is enabled
      - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from <your-ecr-repo>:latest -t <your-ecr-repo>:latest .
      - docker push <your-ecr-repo>:latest
```
By pulling the latest image and using it as a cache source, Docker will reuse unchanged layers, significantly reducing build times.
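CodeBuild also has a built-in local Docker layer cache that can complement --cache-from when consecutive builds land on the same host. A hedged sketch enabling it via the CLI (project name is hypothetical; the cache only helps when a later build reuses that host):

```bash
# LOCAL_DOCKER_LAYER_CACHE keeps Docker layers on the build host itself;
# it requires privileged mode and is best-effort, not guaranteed.
aws codebuild update-project \
  --name my-app-build \
  --cache '{"type": "LOCAL", "modes": ["LOCAL_DOCKER_LAYER_CACHE"]}'
```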
Final Thoughts
Most AWS CodeBuild slowdowns aren’t caused by raw compute power—they’re caused by inefficient provisioning, unnecessary network latency, and poorly optimized Docker builds. By choosing modern runners, removing unnecessary VPC dependencies, using multi-stage builds, and leveraging caching, you can dramatically speed up your CodeBuild Docker build times without increasing costs.
On the pull side, if you deploy these images to Fargate, consider enabling SOCI (Seekable OCI) indexes so containers can lazily load layers at task start, improving pull times as well.
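Generating the index happens outside the Docker build itself; the soci CLI from the open-source soci-snapshotter project can create and push one. A rough sketch, assuming the CLI is installed and the image already exists in the local containerd image store:

```bash
# Hypothetical usage: create a SOCI index for the image, then push the
# index artifacts to the same ECR repository so Fargate can lazy-load
# layers at task start.
soci create <your-ecr-repo>:latest
soci push <your-ecr-repo>:latest
```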

Apply these tips, and your AWS CodeBuild Docker builds should get noticeably faster.