Build development environments from a Dockerfile on Docker, Kubernetes, and OpenShift. Allow developers to modify their environment in a tight feedback loop.
- Supports `devcontainer.json` and `Dockerfile`
- Cache image layers with registries for speedy builds
- Runs on Kubernetes, Docker, and OpenShift
The easiest way to get started is to run the `envbuilder` Docker container, which clones a repository, builds the image from a Dockerfile, and runs the `$INIT_SCRIPT` in the freshly built container. `/tmp/envbuilder` is used to persist data between commands for the purpose of this demo; you can change it to any directory you want.
```bash
docker run -it --rm \
  -v /tmp/envbuilder:/workspaces \
  -e GIT_URL=https://github.com/coder/envbuilder-starter-devcontainer \
  -e INIT_SCRIPT=bash \
  ghcr.io/coder/envbuilder
```
Edit `.devcontainer/Dockerfile` to add `htop`:
```bash
$ vim .devcontainer/Dockerfile
```

```diff
- RUN apt-get install vim sudo -y
+ RUN apt-get install vim sudo htop -y
```
Exit the container, and re-run the `docker run` command. After the build completes, `htop` should exist in the container! 🥳
envbuilder uses Kaniko to build containers. You should follow their instructions to create an authentication configuration.
After you have a configuration that resembles the following:
```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "base64-encoded-username-and-password"
    }
  }
}
```
`base64` encode the JSON and provide it to envbuilder as the `DOCKER_CONFIG_BASE64` environment variable.
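For example, you could encode the config and pass it to the envbuilder container in one step (a minimal sketch; `./config.json` is an assumed path to the config shown above):

```bash
# Encode the Kaniko auth config and pass it as an environment variable
# (./config.json is an illustrative path, not a fixed location)
docker run -it --rm \
  -e DOCKER_CONFIG_BASE64="$(base64 -w0 ./config.json)" \
  ...
```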
Alternatively, if running `envbuilder` in Kubernetes, you can create an `ImagePullSecret` and pass it into the pod as a volume mount. This example will work for all registries.
```bash
# Artifactory example
kubectl create secret docker-registry regcred \
  --docker-server=my-artifactory.jfrog.io \
  --docker-username=read-only \
  --docker-password=secret-pass \
  [email protected] \
  -n coder
```
resource "kubernetes_deployment" "example" {
metadata {
namespace = coder
}
spec {
spec {
container {
# Define the volumeMount with the pull credentials
volume_mount {
name = "docker-config-volume"
mount_path = "/envbuilder/config.json"
sub_path = ".dockerconfigjson"
}
}
# Define the volume which maps to the pull credentials
volume {
name = "docker-config-volume"
secret {
secret_name = "regcred"
}
}
}
}
}
Authenticate with `docker login` to generate `~/.docker/config.json`. Encode this file using the `base64` command:
```bash
$ base64 -w0 ~/.docker/config.json
ewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7CgkJCSJhdXRoIjogImJhc2U2NCBlbmNvZGVkIHRva2VuIgoJCX0KCX0KfQo=
```
Provide the encoded JSON config to envbuilder:

```bash
DOCKER_CONFIG_BASE64=ewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7CgkJCSJhdXRoIjogImJhc2U2NCBlbmNvZGVkIHRva2VuIgoJCX0KCX0KfQo=
```
The `GIT_USERNAME` and `GIT_PASSWORD` environment variables provide Git authentication for private repositories. For access token-based authentication, use the following schema (if a field is empty, there is no need to provide it):
| Provider | `GIT_USERNAME` | `GIT_PASSWORD` |
| --- | --- | --- |
| GitHub | [access-token] | |
| GitLab | oauth2 | [access-token] |
| BitBucket | x-token-auth | [access-token] |
| Azure DevOps | [access-token] | |
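For example, a private GitHub repository could be cloned like this (a minimal sketch; the repository URL and token value are placeholders):

```bash
# Per the table above, GitHub access tokens go in GIT_USERNAME;
# GIT_PASSWORD stays empty for GitHub
docker run -it --rm \
  -v /tmp/envbuilder:/workspaces \
  -e GIT_URL=https://github.com/your-org/private-repo \
  -e GIT_USERNAME=<github-access-token> \
  ghcr.io/coder/envbuilder
```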
If using envbuilder inside of Coder, you can use the `coder_external_auth` Terraform data source to automatically provide this token on workspace creation:
data "coder_external_auth" "github" {
id = "github"
}
resource "docker_container" "dev" {
env = [
GIT_USERNAME = data.coder_external_auth.github.access_token,
]
}
Cache layers in a container registry to speed up builds. To enable caching, authenticate with your registry and set the `CACHE_REPO` environment variable.

```bash
CACHE_REPO=ghcr.io/coder/repo-cache
```
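Putting it together, a cached build might look like this (a sketch; `ghcr.io/your-org/repo-cache` is a placeholder and must be writable with your registry credentials):

```bash
# Build with layer caching enabled, authenticating with an existing
# ~/.docker/config.json (the cache repository name is illustrative)
docker run -it --rm \
  -v /tmp/envbuilder:/workspaces \
  -e GIT_URL=https://github.com/coder/envbuilder-starter-devcontainer \
  -e CACHE_REPO=ghcr.io/your-org/repo-cache \
  -e DOCKER_CONFIG_BASE64="$(base64 -w0 ~/.docker/config.json)" \
  ghcr.io/coder/envbuilder
```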
To experiment without setting up a registry, use `LAYER_CACHE_DIR`:

```bash
docker run -it --rm \
  -v /tmp/envbuilder-cache:/cache \
  -e LAYER_CACHE_DIR=/cache \
  ...
```
Each layer is stored in the registry as a separate image. The image tag is the hash of the layer's contents, and the image digest (the hash of the image tag) is used to pull the layer from the registry.
The performance improvement of builds depends on the complexity of your Dockerfile. For `coder/coder`, uncached builds take 36m while cached builds take 40s (~98% improvement).
When the base container is large, it can take a long time to pull the image from the registry. You can pre-pull the image into a read-only volume and mount it into the container to speed up builds.
```bash
# Pull your base image from the registry to a local directory.
docker run --rm \
  -v /tmp/kaniko-cache:/cache \
  gcr.io/kaniko-project/warmer:latest \
  --cache-dir=/cache \
  --image=<your-image>

# Run envbuilder with the local image cache.
docker run -it --rm \
  -v /tmp/kaniko-cache:/image-cache:ro \
  -e BASE_IMAGE_CACHE_DIR=/image-cache \
  ...
```
In Kubernetes, you can pre-populate a persistent volume with the same warmer image, then mount it into many workspaces with the `ReadOnlyMany` access mode.
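A minimal sketch of such a claim, assuming your storage class supports `ReadOnlyMany` (the name, namespace, and size are illustrative, and the volume must be populated with the warmer output, e.g. via a one-off Job, before workspaces mount it):

```bash
# Create a claim that many workspace pods can mount read-only
# (all names and sizes below are assumptions for illustration)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kaniko-image-cache
  namespace: coder
spec:
  accessModes: ["ReadOnlyMany"]
  resources:
    requests:
      storage: 10Gi
EOF
```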
The `SETUP_SCRIPT` environment variable dynamically configures the user and init command (PID 1) after the container build process.
> **Note**: `TARGET_USER` is passed to the setup script to specify who will execute `INIT_COMMAND` (e.g., `code`).
Write the following to `$ENVBUILDER_ENV` to shape the container's init process:
- `TARGET_USER`: Identifies the `INIT_COMMAND` executor (e.g. `root`).
- `INIT_COMMAND`: Defines the command executed by `TARGET_USER` (e.g. `/bin/bash`).
- `INIT_ARGS`: Arguments provided to `INIT_COMMAND` (e.g. `-c 'sleep infinity'`).
```bash
# init.sh - change the init if systemd exists
if command -v systemd >/dev/null; then
  echo "Hey 👋 $TARGET_USER"
  echo INIT_COMMAND=systemd >> $ENVBUILDER_ENV
else
  echo INIT_COMMAND=bash >> $ENVBUILDER_ENV
fi
```
```bash
# run envbuilder with the setup script
docker run -it --rm \
  -v ./:/some-dir \
  -e SETUP_SCRIPT=/some-dir/init.sh \
  ...
```
- `SSL_CERT_FILE`: Specifies the path to an SSL certificate.
- `SSL_CERT_DIR`: Identifies which directory to check for SSL certificate files.
- `SSL_CERT_BASE64`: Specifies a base64-encoded SSL certificate that will be added to the global certificate pool on start.
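For example, a custom CA certificate could be supplied like this (a sketch; the certificate path is illustrative):

```bash
# Add a self-signed CA to envbuilder's certificate pool on start
# (/usr/local/share/ca-certificates/my-ca.crt is an assumed path)
docker run -it --rm \
  -e SSL_CERT_BASE64="$(base64 -w0 /usr/local/share/ca-certificates/my-ca.crt)" \
  ...
```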