Running a Go web service can be as simple as

  1. Compiling the binary

    go build ./cmd/hello

  2. Copying the binary to where you want to run it

  3. Executing the binary

    ./hello
    

However, executing the binary manually is inconvenient when you need to restart the app on failure or automatically start it on system reboot. In this post, I will show how I run a simple web service (code) with systemd or Docker on a Linux machine, along with some decisions I have made.

Things to consider before we start

Log to stdout

Since both systemd and Docker handle logs, it is easiest to follow the advice from the 12-factor app and log to stdout.
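
With log/slog, for instance, this is a one-time setup at the top of main (a minimal sketch; the handler options are up to you):

package main

import (
	"log/slog"
	"os"
)

func main() {
	// Send structured JSON logs to stdout; both journald and Docker
	// collect whatever the process writes to its standard streams.
	slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))

	slog.Info("hello from stdout")
}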

Graceful shutdown

Systemd and Docker stop an application by sending a SIGTERM signal. The web service should be aware of this signal and clean up resources gracefully before stopping. One way to do this is to start the http.Server in a goroutine, and block on the signal in the main goroutine. I usually also add os.Interrupt for local development.

	server := http.Server{Addr: "127.0.0.1:8080", Handler: logMiddleware(mux)}

	go func() {
		slog.Info("starting service", "addr", server.Addr)
		err := server.ListenAndServe()
		if err != nil && !errors.Is(err, http.ErrServerClosed) {
			slog.Error("listen and serve", "addr", server.Addr, "error", err)
		}
	}()

	// Graceful shutdown
	shutdown := make(chan os.Signal, 1)
	signal.Notify(shutdown, os.Interrupt, syscall.SIGTERM)
	<-shutdown

	ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
	defer cancel()
	err := server.Shutdown(ctx)
	if err != nil {
		return fmt.Errorf("shutdown HTTP server: %w", err)
	}
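
To exercise the shutdown path locally, press Ctrl-C (that is the os.Interrupt case), or send SIGTERM by hand, for example with pidof:

kill -TERM "$(pidof hello)"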

Disable cgo if possible

If we compile the example app with go build ./cmd/hello and run ldd hello afterwards, the output looks like the following.

% ldd hello        
	linux-vdso.so.1 (0x00007fb7359f2000)
	libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fb7359ab000)
	libc.so.6 => /usr/lib/libc.so.6 (0x00007fb7357bb000)
	/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fb7359f4000)

This is because cgo is enabled by default on Linux, and the net package from the standard library can use a cgo-based DNS resolver. As a result, the binary depends on libc, usually a specific major version of glibc. This means a binary built on Ubuntu 24.04 may fail to run on Ubuntu 22.04 because the glibc there is too old.

To avoid this problem, if the app is written in pure Go, we could compile the app with CGO_ENABLED=0 go build ./cmd/hello instead. This forces the use of the pure Go resolver and makes the executable a static binary that can run across different Linux systems.
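
Running ldd again shows the difference; for a static binary, glibc's ldd reports something like the following.

% CGO_ENABLED=0 go build ./cmd/hello
% ldd hello
	not a dynamic executable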

Running as a systemd user unit

First, compile the executable and place it somewhere. I will put it inside /home/tzu-yu/hello/.

Then, create the directory to put systemd user units in.

mkdir -p ~/.config/systemd/user

Create a systemd unit file “hello.service” inside the created directory, following this base example:

[Unit]
Description=Hello service

[Service]
ExecStart=/home/tzu-yu/hello/hello
WorkingDirectory=/home/tzu-yu/hello
Restart=on-failure

[Install]
WantedBy=default.target

The ExecStart is the path to the executable. I also set the WorkingDirectory because I usually place a config file alongside the executable and access it with a relative path. Setting Restart to on-failure makes systemd restart the service automatically when the process crashes or exits with a non-zero status. The WantedBy=default.target makes the user manager's default target pull in our hello service, which means that once enabled, this service will be started with the user session. (Note that user units cannot hook into the system-level multi-user.target.)

Because we modified the unit definition, run the following command to make systemd pick it up. The --user flag is important because we are using a user unit.

systemctl --user daemon-reload

Then we can check the service status with the status subcommand; the argument is the name of the unit file. (Note that the .service suffix can be omitted if the name is unambiguous.)

% systemctl --user status hello.service    
○ hello.service - Hello service
     Loaded: loaded (/home/tzu-yu/.config/systemd/user/hello.service; disabled; preset: enabled)
     Active: inactive (dead)

We can then start the service with

systemctl --user start hello.service

Checking the status again, we can see that the application started

% systemctl --user status hello.service 
● hello.service - Hello service
     Loaded: loaded (/home/tzu-yu/.config/systemd/user/hello.service; disabled; preset: enabled)
     Active: active (running) since Mon 2025-07-28 02:27:16 CST; 2s ago
 Invocation: 21afaa3017e5419f8d1c3e1a2c842694
   Main PID: 33283 (hello)
      Tasks: 8 (limit: 18973)
     Memory: 2.8M (peak: 4.3M)
        CPU: 4ms
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/hello.service
             └─33283 /home/tzu-yu/hello/hello

Jul 28 02:27:16 arch systemd[2165]: Started Hello service.
Jul 28 02:27:16 arch hello[33283]: {"time":"2025-07-28T02:27:16.216418324+08:00","level":"INFO">

We can stop or restart the service with

systemctl --user stop hello.service 
systemctl --user restart hello.service 

The logs are handled by journald, which can be accessed via

journalctl --user-unit hello.service
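
To follow new log entries as they arrive, add the -f flag.

journalctl --user-unit hello.service -f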

Finally, to make the service start automatically when we log in, we enable it by

systemctl --user enable hello.service 
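
Note that an enabled user unit is started at login, not at boot. If the service should start at boot without anyone logging in, additionally enable lingering for the user:

loginctl enable-linger tzu-yu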

Running with a Docker container

When the executable has to depend on C libraries, I find it easier to wrap the dependencies in a container, so there is only one OS environment to control.

Building a Docker image

Build command

The example project contains a Dockerfile in cmd/hello. We can build a Docker image from the project root with

docker build -t hello:dev -f ./cmd/hello/Dockerfile .

The -t hello:dev flag specifies the image name and tag. I tag the image with the corresponding Git tag when releasing; otherwise I use dev to avoid accumulating a whole bunch of tags. The flag -f points to the Dockerfile. The argument . specifies the build context to be the current directory.
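
For a release build, the tag could be taken from Git directly, assuming the checkout is on a tagged commit (a sketch, not part of the example project):

docker build -t hello:$(git describe --tags) -f ./cmd/hello/Dockerfile .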

Writing a Dockerfile

There are two things to keep in mind when writing a Dockerfile; together they keep the resulting image as small as possible and speed up the build process.

Multi-stage build

The environments for compiling the application and running it are usually quite different. For instance, to compile a Go program, you need not only a Go compiler but also the source code and the sources of all its dependencies, none of which are needed when running the executable. By using multiple FROM ... AS ... statements in a Dockerfile, we can create different stages to separate these environments. In this example, the first stage starts with the official Go image

FROM golang:1.24.5-bookworm@sha256:ef8c5c733079ac219c77edab604c425d748c740d8699530ea6aced9de79aea40 AS backend

and builds the Go binary. Then another stage is created to collect the artifacts. This stage exists because I usually have another stage that builds the frontend to JavaScript.

FROM busybox AS collect

The executable and only the executable is copied between stages with COPY --from=<stage>.

COPY --from=backend /build/hello /app/hello

Finally, the collected files are copied into a small image.

FROM gcr.io/distroless/static-debian12:nonroot@sha256:627d6c5a23ad24e6bdff827f16c7b60e0289029b0c79e9f7ccd54ae3279fb45f
COPY --from=collect /app /app

Separating the build into stages helps reduce the image size, and it also decouples instructions so we can utilize the cache better.

Make good use of the build cache

Docker images consist of layers, and each layer is a set of filesystem changes. Each instruction in the Dockerfile usually corresponds to a layer. The docker build command caches the built layers, so the build process can be sped up by avoiding repetition.

Because the layers are changes, if the cache for layer 2 is invalidated, the instructions for layers 3, 4, and so on have to be rerun as well. Therefore, place the instructions that are less likely to need rerunning closer to the top of the Dockerfile. For example, if your code depends on some system packages, both of the following snippets produce the same result.

  1. RUN apt-get update && apt-get install -y --no-install-recommends libopenblas-dev
    COPY . .
    RUN go build -o ./hello ./cmd/hello
    
  2. COPY . .
    RUN apt-get update && apt-get install -y --no-install-recommends libopenblas-dev
    RUN go build -o ./hello ./cmd/hello
    

However, the second version does not use the cache effectively. Whenever the code in the context directory changes, the apt-get update and apt-get install are guaranteed to be rerun. In other words, the frequently invalidated COPY . . line should appear as late as possible.

Splitting commands can also help utilize the cache. When running go build, the Go toolchain attempts to fetch the Go dependencies referenced in the code. If you are building on a regular Linux machine, the dependencies are cached locally. When building in Docker, however, that cache does not exist in the base image, so the download would be repeated in every build. Given that the set of dependent modules changes less frequently than the code, we can download the modules separately with go mod download.

COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o ./hello ./cmd/hello

The first line copies only go.mod and go.sum into the build image, so this layer and the next will not be invalidated unless one of the two files changes. Therefore, the result of go mod download can be cached most of the time, and building the image usually starts from COPY . .. Note that, for this to work properly, go mod tidy should be run regularly or whenever new dependencies are added, so that go.mod and go.sum stay in sync with the code.

Choosing base images

  1. Make sure to use only official images. For example, when pulling a debian image, do not use some-random-guy/debian unless absolutely necessary.

  2. When the executable links to libc, the build stage and the runtime stage should share a common OS major version. The official golang image is Debian-based. I would choose the latest version with the newest OS version (1.24.5-bookworm at the time of writing). The runtime image will then be either a slim variant of Debian (debian:12.11-slim) or a Debian 12 based distroless image.

  3. I use distroless for runtime containers if I do not need apt-get when building them. I opt for the nonroot tag so that the executable does not run as root. However, beware that distroless images do not contain a shell, so they are not suitable if you often access a running container with docker exec -it; use the slim variant of Debian instead.

    Roughly, I choose the variant as follows:

    • If the binary is static (does not depend on libc): static-debian12.
    • If libc is needed: base-nossl-debian12. (Go has its own TLS implementation, so OpenSSL is rarely needed.)
    • If something else like libstdc++ is needed: cc-debian12.
  4. I pin the base image version with both the tag and the digest, a habit I picked up from an article. Using the digest ensures that I am always pulling the exact same image. This makes things less flexible, as the digest pins a specific platform (amd64 for me) and forces updates to be manual. The tag is also included because it is easier for me to read; I try to use the full version tag if possible.

    FROM golang:1.24.5-bookworm@sha256:ef8c5c733079ac219c77edab604c425d748c740d8699530ea6aced9de79aea40 AS backend
    

Checking the image for vulnerabilities

Vulnerabilities can be found in the OS components inside Docker images. You can use trivy to scan the image against a database of known vulnerabilities. The result can guide whether you should update the base image.
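
For example, to scan the image built earlier (assuming trivy is installed):

trivy image hello:dev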

Cleaning up

The built images and build cache take up disk space, so we need to clean them up periodically. If you build new images with the same tag, you might start to see untagged images (shown as <none>) when running docker image list.

REPOSITORY                   TAG       IMAGE ID       CREATED         SIZE
hello                        dev       0f057121088f   21 hours ago    10.6MB
<none>                       <none>    eb6e3ae5f350   22 hours ago    10.6MB

To remove those untagged images, run docker image prune periodically. However, this alone might not free the disk space, because the layers may still be referenced by the build cache. Run

docker buildx prune --filter until=120h

to remove the dangling build cache older than 5 days (120 hours). You might want to set the time filter differently.

Running a Docker container

After creating the Docker image, we can run it with

docker run -d --restart always -p 8082:8080 --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 --name hello-service hello:dev

The important points to note are

  • The flag -d tells Docker to run the container in the background instead of attaching it to the current terminal.
  • Remember to give the container a name, so it is easier to refer to later.
  • The default json-file log driver does not limit the maximum log size. To avoid letting the logs eat up the disk, we need to set the maximum amount to store explicitly.
  • The --restart always makes Docker restart the container when it crashes and start it again on system boot (when the Docker daemon starts).
  • It is usually easier to group these options into a Docker Compose file, as sketched below.
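
A minimal compose.yaml equivalent to the docker run command above might look like this (the service key hello is an arbitrary name):

services:
  hello:
    image: hello:dev
    container_name: hello-service
    restart: always
    ports:
      - "8082:8080"
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

With this file in place, docker compose up -d starts the container in the background.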

We can stop or restart the container with

docker stop hello-service
docker restart hello-service

We can inspect the logs with

docker logs -n 100 hello-service

Note that docker logs does not invoke a pager; it dumps the whole log into your terminal, which might be slow. The -n 100 flag limits the output to the last 100 lines. (Add -f to follow new output as it arrives.)
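
For reference, here is the complete Dockerfile from the example project: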

FROM golang:1.24.5-bookworm@sha256:ef8c5c733079ac219c77edab604c425d748c740d8699530ea6aced9de79aea40 AS backend

ARG CGO_ENABLED=0
RUN mkdir /build
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o ./hello ./cmd/hello

# Distroless has neither a shell nor mkdir, so create the directories here
FROM busybox AS collect
RUN mkdir -p /app/admin_frontend/dist
COPY --from=backend /build/hello /app/hello

FROM gcr.io/distroless/static-debian12:nonroot@sha256:627d6c5a23ad24e6bdff827f16c7b60e0289029b0c79e9f7ccd54ae3279fb45f
COPY --from=collect /app /app
WORKDIR /app
CMD ["/app/hello"]