My Next.js Docker Adventure - Confessions of a Frontend Developer
Tags: docker, nextjs, frontend, react, devops
A personal story about containerizing Next.js applications and the transformative impact on my development workflow
When Next.js Met Docker in My Life
"We need this in production by Friday."
It was Monday morning, and I had just completed a complex Next.js application that had taken weeks to build. The app was beautiful: server-side rendering, API routes, image optimization, internationalization—all the Next.js goodies. But I hadn't given a single thought to how it would be deployed.
Until that moment, my deployment strategy for Next.js had been simple: push to GitHub and let the magical Vercel platform handle everything. But this client wanted to host the application on their own infrastructure. No Vercel. No magical deployments. Just me, my code, and a deadline.
That's when I turned to Docker. I'd heard about it, even used it peripherally on other projects, but never really embraced it fully. This was the push I needed to dive into the world of containerizing Next.js applications—and what an enlightening journey it has been.
The First Stumbling Steps
Like any good developer, I started by copying a Dockerfile from Stack Overflow. How hard could it be?
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["npm", "start"]
I built my image, ran the container, and... nothing happened. Well, something happened, but not what I expected. The container started and immediately stopped. No errors. Just silence.
After much debugging and reading Docker logs, I discovered several issues:
- The build was taking forever because I was copying node_modules and .next into the image
- The container wasn't exposing any ports
- Environment variables weren't being passed correctly
After many iterations and a lot of coffee, I arrived at a more functional Dockerfile:
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
It worked! But this was just the beginning of my journey.
The Next.js Deployment Mode Revelation
One of my first "aha" moments came when I realized Next.js has different deployment modes, each requiring a slightly different Docker approach:
- The Server: Using next start to run a Node.js server
- The Static Export: Using next export for fully static sites
- The Standalone Output: The newer output: 'standalone' option for optimized Node.js servers
I had been using the server approach without really understanding what I was doing. It was like driving a car without knowing how the engine works—fine until something breaks down.
My first project needed server-side rendering and API routes, so I stuck with the server approach. But I made a critical improvement to my Docker setup by implementing multi-stage builds:
# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
EXPOSE 3000
CMD ["npm", "start"]
The size of my Docker image decreased dramatically, and the build became much faster. I felt like a Docker wizard! Until I tried to deploy to production and realized I'd missed something crucial.
The Production Security Wake-Up Call
"Your container is running as root."
The client's security team flagged this issue immediately. I had no idea what they were talking about at first. After some research, I learned about the security implications of running containers as the root user.
I modified my Dockerfile to create a non-root user:
FROM node:16-alpine AS builder
# ...same build stage as before...
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Add a non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
RUN chown -R nextjs:nodejs /app
# Copy built files
COPY --from=builder --chown=nextjs:nodejs /app/next.config.js ./
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/package.json ./package.json
USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
This was better, but when I looked at the size of the image, it was still over 1GB. For a Next.js app! There had to be a better way.
The Standalone Output Game-Changer
Reading through the Next.js documentation one night, I discovered the output: 'standalone' option. This feature, introduced in Next.js 12, creates a minimal production server that includes only the necessary files.
I updated my next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: "standalone"
};
module.exports = nextConfig;
And modified my Dockerfile:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy only the necessary files
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
USER nextjs
EXPOSE 3000
ENV PORT 3000
ENV HOSTNAME "0.0.0.0"
CMD ["node", "server.js"]
The result was magical. My Docker image shrank to under 200MB, startup times improved, and memory usage decreased. The client's infrastructure team was impressed, and I felt like I'd leveled up my deployment skills.
The Developer Experience Dilemma
While my production setup was improving, my development workflow was suffering. Every time I made a code change, I had to rebuild the entire Docker image. This was painfully slow and killed my productivity.
I needed a way to maintain the isolation benefits of Docker while preserving the fast feedback loop I was used to with Next.js.
The solution was a separate development Dockerfile and Docker Compose setup:
# Dockerfile.dev
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
# We'll mount the code as a volume
EXPOSE 3000
CMD ["npm", "run", "dev"]
And a docker-compose.yml file:
version: "3"
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      # Mount the source tree so code changes trigger hot reloading
      - ./:/app
      # Anonymous volumes: keep node_modules and .next from the image,
      # so the host's copies (or their absence) don't shadow them
      - /app/node_modules
      - /app/.next
    environment:
      - NODE_ENV=development
The first time I ran docker-compose up and saw the Next.js dev server start, with hot reloading working perfectly, I almost cried with joy. I could have my Docker cake and eat it too!
Static Export: The Nginx Discovery
A few months later, I had a different project—a content-heavy Next.js site with no dynamic server requirements. I realized this was a perfect candidate for static export.
I modified my next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: "export"
};
module.exports = nextConfig;
And created a new Dockerfile:
# Build stage
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Serve stage with Nginx
FROM nginx:alpine
COPY --from=builder /app/out /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The resulting Docker image was tiny—less than 50MB! And since it was just static files served by Nginx, it was blazing fast and could handle enormous traffic with minimal resources.
This approach became my go-to solution for content-focused Next.js sites.
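One caveat worth knowing with the Nginx approach: the default Nginx config only serves exact paths, while a static Next.js export writes pages as .html files (so /about lives at /about.html). A small custom config smooths this over — this is a sketch, not the config from the original project:

```text
server {
  listen 80;
  root /usr/share/nginx/html;

  location / {
    # Try the exact path, then the exported .html file,
    # then a directory index, then the exported 404 page
    try_files $uri $uri.html $uri/ /404.html;
  }
}
```

Copied into the image (e.g. to /etc/nginx/conf.d/default.conf), this lets visitors use clean URLs the way they would against next start.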
The Environment Variable Saga
Environment variables in Next.js are a special kind of challenge when working with Docker. I learned (the hard way) about the difference between build-time and runtime environment variables.
Variables prefixed with NEXT_PUBLIC_ are embedded in the JavaScript bundle at build time, while others are only available server-side at runtime.
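To make that distinction concrete, here's a minimal sketch in plain Node.js. The variable names (DATABASE_URL, NEXT_PUBLIC_API_URL) and the fallback URL are illustrative, not from any real project:

```javascript
// Server-only variables are read from process.env at runtime, so the same
// image can be deployed with different values per environment.
function getDatabaseUrl() {
  // DATABASE_URL is a hypothetical name used for illustration
  return process.env.DATABASE_URL ?? "postgresql://localhost:5432/dev";
}

// NEXT_PUBLIC_* variables behave differently: during `next build`, the
// bundler replaces expressions like process.env.NEXT_PUBLIC_API_URL in
// client code with a string literal. After the build it is effectively:
const apiUrl = "https://api.example.com"; // baked in, not read at runtime

module.exports = { getDatabaseUrl, apiUrl };
```

This is why changing a NEXT_PUBLIC_ value requires rebuilding the image, while a server-only value can simply be passed to docker run with -e.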
After much trial and error, I developed a pattern for handling both types correctly in Docker:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Build-time variables must be set during build
ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
RUN npm run build
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Create the non-root user referenced by USER below
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy necessary files
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
# Runtime variables can be set here or passed at runtime
ENV DATABASE_URL=""
ENV REDIS_URL=""
USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
This approach allowed me to bake public variables into the image at build time while keeping sensitive variables configurable at runtime.
The Caching Breakthrough
As my Next.js projects grew larger, build times became increasingly painful. A colleague suggested using BuildKit's cache mounts to speed things up:
FROM node:16-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Use BuildKit cache for faster builds
RUN --mount=type=cache,target=/app/.next/cache \
NEXT_TELEMETRY_DISABLED=1 npm run build
# ...rest of Dockerfile as before...
After enabling BuildKit in Docker, my build times were cut in half! The cache persisted between builds, meaning Next.js didn't have to recompile unchanged pages.
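For anyone following along: BuildKit can be enabled per invocation by prefixing the build with DOCKER_BUILDKIT=1, or permanently via the Docker daemon configuration (on recent Docker versions it is already the default builder). The daemon-level setting is a small fragment in /etc/docker/daemon.json:

```text
{
  "features": {
    "buildkit": true
  }
}
```

Without BuildKit enabled, the RUN --mount=type=cache syntax above is simply rejected by the classic builder.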
Docker Compose for Full-Stack Applications
The real game-changer came when I started using Docker Compose for full-stack Next.js applications. One project required a Next.js frontend, a PostgreSQL database, and a Redis cache.
Instead of complex local setup instructions, I created a comprehensive Docker Compose configuration:
version: "3"
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - ./:/app
      - /app/node_modules
      - /app/.next
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - NODE_ENV=development
    depends_on:
      - db
      - redis
  db:
    image: postgres:13
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=myapp
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
volumes:
  postgres_data:
Onboarding new developers became trivial: clone the repo, run docker-compose up, and they were ready to go. No more "it works on my machine" problems!
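One caveat I learned to watch for: depends_on only waits for the db and redis containers to start, not for Postgres to actually accept connections. A healthcheck plus a condition closes that gap — a sketch using the service names above, with Compose-spec syntax as understood by the Compose v2 CLI:

```text
services:
  db:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  nextjs:
    depends_on:
      db:
        condition: service_healthy
```

With this, the Next.js container isn't started until pg_isready reports the database is accepting connections, which avoids connection-refused errors on first boot.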
TypeScript and Testing in the Mix
As I integrated TypeScript and testing into my Docker workflow, I found ways to ensure type safety and test reliability in the containerized environment:
For TypeScript projects, I added type checking to the build stage:
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Run type checking before building
RUN npm run type-check
RUN npm run build
# ...rest of Dockerfile as before...
And for testing, I created a separate Dockerfile.test:
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "test"]
This ensured that tests ran in the same environment as the application, preventing the "it works locally but fails in CI" syndrome.
CI/CD: The Final Piece
The last part of my Docker journey with Next.js was setting up proper CI/CD pipelines. GitHub Actions became my tool of choice:
name: Build and Deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
This workflow automatically built and pushed my Docker images to the GitHub Container Registry whenever I pushed to the main branch. It was the automation cherry on top of my containerization cake.
Where I Am Today
Looking back on my journey with Next.js and Docker, I'm amazed at how far I've come. What started as a desperate attempt to meet a deployment deadline has evolved into a comprehensive approach that has transformed how I build and deploy web applications.
Today, my workflow includes:
- Optimized production Dockerfiles with multi-stage builds
- Development environments with hot reloading
- Different strategies for different Next.js output modes
- Proper environment variable handling
- Build caching for faster iterations
- Comprehensive Docker Compose setups for full-stack applications
- Integrated testing and type checking
- Automated CI/CD pipelines
The benefits have been immense:
- Deployments are consistent and predictable
- Development environments match production closely
- Onboarding new team members is simple and fast
- Infrastructure requirements are documented as code
- Scaling is straightforward
If you're a Next.js developer still deploying the old way, I encourage you to give Docker a try. There's a learning curve, but the productivity and peace of mind you'll gain are well worth the investment.
Remember, containerization isn't just about deployment—it's about creating a consistent, reproducible environment for your application throughout its lifecycle. Once you experience the confidence that comes from knowing "if it works in the container, it works everywhere," you'll never want to go back.
Happy containerizing your Next.js applications!