- Sun 13 April 2025
- programming
- Gaige B. Paulsen
- #programming, #gitlab, #docker, #kubernetes
GitLab CI: Docker vs Kubernetes
Starting with a problem
When looking through some recent updates to the Cartographica dependencies, I noticed that some of the updates were not being automatically merged, despite having passed all the tests. Not a big deal, really, since all I needed to do was manually merge the changes. It got a little more interesting, however, when I noticed that the problem was a minute difference in code coverage between builds, despite no change in the code being checked for coverage. It's possible that there is some small change somewhere, but I wanted to see what I could do to iron this out.
My approach on this problem was twofold:
- Reduce the precision of the code coverage numbers (one way to do this is sketched just below this list)
- Combine the code coverage reports using ReportGenerator
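As a minimal sketch of the first item (the job name, test runner, and summary line here are illustrative placeholders rather than the real pipeline), one way to reduce the reported precision in GitLab CI is to print a rounded coverage percentage and capture it with the coverage keyword:

unit_tests_x86:
  script:
    - ./run-tests.sh   # hypothetical test runner that writes a Cobertura coverage.xml
    # round the Cobertura line-rate to one decimal place so tiny run-to-run
    # differences don't change the number GitLab records for the job
    - pct=$(python3 -c "import xml.etree.ElementTree as ET; print(round(float(ET.parse('coverage.xml').getroot().get('line-rate')) * 100, 1))")
    - 'echo "Code coverage: ${pct}%"'
  coverage: '/Code coverage: \d+\.\d+/'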
To date, ReportGenerator is the only tool I've found that can combine multiple Cobertura reports and maintain a high level of precision. Other tools will "combine" the reports by assuming they each test a different area (such as subroutines or modules). In the case of Cartographica, I collect test information from 3 sources:
- Cartographica unit tests on x86
- Cartographica unit tests on ARM
- Cartographica GUI tests
The first two are necessary because there are actual code differences between them. Some libraries never made the leap to ARM, and thus aren't supported.
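As a rough sketch of the merge step (the report file names and output directory are placeholders, not the actual artifact paths), the heart of it is a single ReportGenerator invocation pointed at all three Cobertura files:

merge_coverage:
  script:
    # -reports takes a semicolon-separated list (globs also work); everything
    # is merged into a single report tree under the target directory
    - >
      reportgenerator
      "-reports:coverage-x86.xml;coverage-arm.xml;coverage-gui.xml"
      "-targetdir:coverage-merged"
      "-reporttypes:Cobertura;TextSummary"

Here, the Cobertura report type produces a single merged XML file that can be handed back to GitLab, and TextSummary is just a quick human-readable sanity check.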
Dockerizing ReportGenerator
ReportGenerator is a .NET application, and I'm not particularly interested in installing and maintaining a .NET infrastructure. As such, I decided to put the ReportGenerator application into a Docker container.
First, I checked to see if there was already a reasonable source. Dockerhub had images, but they were literally years old, and they were full OS images, so they contained all manner of cruft.
I decided that I'd make my own image and make use of the base images maintained by ChainGuard. They have a wide selection of base images (and application images) that are built with security in mind. In most cases, they not only have reduced-attack-surface images, but they use distroless images where possible.
The Dockerfile in question was a simple one:
# Build docker image containing .net runtime from chainguard and ReportGenerator application
# syntax=docker/dockerfile:1
FROM cgr.dev/chainguard/dotnet-sdk:latest-dev@sha256:aed050b3cb6e0d3c06e56e5ad15f1ce193bcefa4536550cf4daf81bad6da3b97 AS builder
ARG RG_VERSION=5.4.5
WORKDIR /app
RUN dotnet tool install dotnet-reportgenerator-globaltool --tool-path /app --version $RG_VERSION
# runtime image
FROM cgr.dev/chainguard/dotnet-runtime:latest-dev@sha256:7959c9804da649b8ffcbaafeb93ee264921bc34203b8b076def985efcc754011
WORKDIR /app
ENV DOTNET_ROOT=/usr/share/dotnet
ENV PATH="${PATH}:${DOTNET_ROOT}/tools:${DOTNET_ROOT}/tools/dotnet-tools"
COPY --from=builder /app /app
CMD ["/app/reportgenerator"]
# This is fine for k8s, but not for docker run for CI without manually overriding the exec command
ENTRYPOINT ["/app/reportgenerator"]
Note that I pulled in the dotnet-sdk image to build the ReportGenerator application, and then copied the application into the runtime image. This was necessary to make sure I had the appropriate runtime on the runtime image. You may also be wondering why I was using the latest-dev tag instead of the latest tag. The reason is that I wanted to run the automation inside of the container, which was going to require a shell. If I'd used latest, I would have needed to either launch via docker-in-docker or use some kind of sidecar.
Running ReportGenerator in GitLab CI: Docker vs Kubernetes
Historically, I've considered these two to be equivalent. As such, I normally require docker in CI files and then tag the Kubernetes runners with both docker and kubernetes. In the past, this has worked fine, although what I hadn't realized was that GitLab handles containers differently between the Docker and Kubernetes executors. The main difference is in how they handle the ENTRYPOINT command. In the case of Docker, the ENTRYPOINT command is overridden by the command in the CI file only if you explicitly override it. In the case of Kubernetes, the ENTRYPOINT command is always overridden by the command in the CI file. Effectively, this means that for a GitLab CI script, you need to either use Kubernetes or pay attention to the ENTRYPOINT command in the Dockerfile.
My original image was intended to be a distroless image for running the reports, which would generally have meant an ENTRYPOINT command. However, since I wanted to execute the CI steps in the container (which is a common pattern), that wouldn't work without some modifications to the CI file:
docker_test:
  image:
    name: "${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA}"
    entrypoint: ["/usr/bin/sh", "-c"]
I'd imagine most people won't run into this problem, since GitLab CI effectively requires that the images used for jobs include a shell. As such, most folks will just use an image that has a shell, and thus likely uses the shell as its entrypoint.
In this case, I know the shell is there; I just need to tell Docker where to find it. This seems to be a good balance between having to modify the image's entrypoint and having to restrict the jobs to just the Kubernetes runners.
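For completeness, a fuller version of that job might look something like this (the tag, script, and artifact path are illustrative rather than lifted from the actual pipeline); with the entrypoint override in place, the same job can land on either a Docker or a Kubernetes runner:

docker_test:
  image:
    name: "${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_SLUG}:${CI_COMMIT_SHA}"
    # explicit shell entrypoint so the Docker executor doesn't try to run the
    # job script through /app/reportgenerator
    entrypoint: ["/usr/bin/sh", "-c"]
  tags:
    - docker   # the Kubernetes runners carry this tag too, per the setup above
  script:
    - >
      /app/reportgenerator
      "-reports:coverage-*.xml"
      "-targetdir:coverage-merged"
      "-reporttypes:Cobertura"
  artifacts:
    paths:
      - coverage-merged/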
Conclusion
All told, a small bit of annoyance, thankfully mostly during the testing phase. I did have to adjust the image due to a long-running job failing because it happened to hit the Docker runner instead of the Kubernetes runner, which was the impetus for this whole exploration.