Pass Google Cloud Credentials to a Docker build

Wednesday, 20 Aug 2025 · Kevin L. Keys · 2 min read

Problem

I recently had good reason to test a Docker container build against an internal Python repository. This is a common setup for companies that host internal packages in an artifact or package registry. The internal registry typically supports the usual pip install (or uv pip install) workflow for packages not published to the public PyPI. My problem was that the Docker container cannot see this registry by default, so RUN python3 -m pip install ${MY_INTERNAL_PACKAGE} will invariably fail. How could I pass authentication to a Docker build, ideally without baking the credentials into the container?
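To make the failure concrete, here is a minimal sketch of the naive approach; the registry URL and package name are hypothetical stand-ins:

# No credentials are available inside the build, so pip gets a
# 401/403 from the internal registry and the build fails.
FROM python:3.12-slim
RUN python3 -m pip install \
    --index-url https://us-python.pkg.dev/my-project/my-repo/simple/ \
    my-internal-package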

Gemini AIms low

My generation learned to outsource questions like this to Google. The modern modus operandi is to ask your favorite AI model for help. Thankfully, Google automatically shoehorns queries through Gemini now. But as I’ve learned to expect, Gemini digests a query like “docker pass google cloud application default credentials as secret during build” and excretes a putrid heap of nonsense:

Code:

# Dockerfile example using build secret
FROM ubuntu:latest

# Install gcloud CLI (if needed for your build process)
# ...

RUN --mount=secret,id=gcp_credentials \
cp /run/secrets/gcp_credentials /tmp/credentials.json && \
	export GOOGLE_APPLICATION_CREDENTIALS=/tmp/credentials.json && \
	# Your build commands that require GCP access
	gcloud auth activate-service-account --key-file=/tmp/credentials.json && \
	# ...
	rm /tmp/credentials.json # Clean up the secret after use

Build command:

docker build --secret id=gcp_credentials,src=/path/to/your/key.json .

This should correctly pass the secret to the Docker build.

This… doesn’t work. Among other flaws, the mount flag is malformed: it should read --mount=type=secret,id=gcp_credentials. It’s also needlessly complicated (why copy credentials from one temporary location to another, only to manually delete them later?).

A secret solution

In this case, AI-powered search somehow failed to turn up the actual Docker docs for doing this. First, edit your Dockerfile to consume a secret in the appropriate RUN step:

# consume AWS credentials for using the AWS CLI
RUN --mount=type=secret,id=aws \
    AWS_SHARED_CREDENTIALS_FILE=/run/secrets/aws \
    aws s3 cp ...

Then pass the secret to the build job:

# pass suitable AWS credentials to docker build 
docker build --secret id=aws,src=$HOME/.aws/credentials .

I work in a Google Cloud environment, so my fix requires generating application default credentials first:

gcloud auth application-default login

By default, gcloud stashes the resulting credentials file in ${HOME}/.config/gcloud/application_default_credentials.json. Then I edit my Dockerfile to listen for this secret:

#...
RUN --mount=type=secret,id=gcp export GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp && \
    python3 -m pip install --no-cache-dir --upgrade pip &&  \
    python3 -m pip install --no-cache-dir ${MY_INTERNAL_PACKAGE}==${VERSION} 
#...
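One assumption hiding in this step: pip needs some way to exchange GOOGLE_APPLICATION_CREDENTIALS for a registry token. For Google Artifact Registry, that is typically the keyring backend, installed earlier in the Dockerfile (a sketch, assuming an Artifact Registry Python repository; your registry may authenticate differently):

# let pip authenticate to Artifact Registry via the mounted ADC file
RUN python3 -m pip install --no-cache-dir keyring keyrings.google-artifactregistry-auth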

Finally, I whisper the secret to the Docker build process:

docker buildx build \
	--build-arg ${MY_BUILD_ARG} \
	--platform ${MY_BUILD_PLATFORM} \
	-t ${MY_TAG} \
	--secret id=gcp,src=${HOME}/.config/gcloud/application_default_credentials.json \
	-f ${DOCKERFILE_PATH} \
	. 

And the build works!
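A nice property of secret mounts, worth verifying: the credentials exist only for the duration of their RUN step and never land in an image layer. A quick sanity check, using the tag from the build above:

# this fails, because the secret was mounted only during the build
# step and is not persisted in the final image
docker run --rm ${MY_TAG} cat /run/secrets/gcp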
