Many applications are packaged in OCI images but not in Guix. A good subset of them is written in NodeJS, Go, Rust, or other languages that, as a general approach, encourage applications to have huge dependency graphs.
The Guix project accepts package contributions that comply with very strict standards regarding whether the package and its dependencies can be completely built from source. This is why practically no JavaScript applications (or even web applications with complex frontends) are in mainline Guix, and it is not clear whether they ever will be.
OCI backed services
Yet the Guix System is completely usable for self-hosting purposes. If you use docker compose on the Guix System, you end up with two different interfaces to manage your system services: Shepherd and Docker/Podman. The oci-service-type aims at implementing Shepherd services that look and feel native (so you can configure and manage them with the usual consistent interface that Guix exposes) but under the hood are implemented as docker run or podman run invocations.
(simple-service 'oci-provisioning
                oci-service-type
                (oci-extension
                 (networks
                  (list
                   (oci-network-configuration (name "monitoring"))))
                 (containers
                  (list
                   (oci-container-configuration
                    (image "prom/prometheus")
                    (network "monitoring")
                    (ports
                     '(("9000" . "9000")
                       ("9090" . "9090"))))
                   (oci-container-configuration
                    (image "docker.io/grafana/grafana:latest")
                    (network "monitoring")
                    (ports
                     '("3000:3000"))
                    (volumes
                     '(("/var/lib/grafana" . "/var/lib/grafana"))))))))
In this example two different Shepherd services are going to be added to the system. Each oci-container-configuration record translates to a docker run or podman run invocation and its fields map directly to its options. You can refer to the Docker or Podman upstream documentation for the semantics of each value. If the images are not found, they will be pulled.
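For instance, assuming a Podman runtime, the first container above translates, roughly, to an invocation along these lines (the container name and the exact set of flags are generated by the service type and may differ; this is only meant to illustrate the mapping):
$ podman run --rm --replace --name prometheus --network monitoring -p 9000:9000 -p 9090:9090 prom/prometheus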
A backend example
Let's start with a simple example; you can imagine this being the equivalent of your backend service or SQL process. Its behavior is quite simple: when someone sends an HTTP GET request for the /whosin path at port 7777, the script returns out of office and writes empty into /tmp/office:
$ curl localhost:7777/whosin
out of office
$ cat /tmp/office
empty
In case any other path is queried, it returns unknown path plus the requested path over HTTP and writes on fire into /tmp/office. This is what happens when the /lallo path is requested:
$ curl localhost:7777/lallo && cat /tmp/office
unknown path /lallo
on fire
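The backend script itself is not reproduced here. A minimal Guile sketch implementing the behavior described above could look like the following; treat it as an illustration rather than the exact script used in this post (the use of the (web server) module, the listening address and the overwrite semantics of the office file are assumptions):
#!/usr/bin/env guile
!#
(use-modules (web request)
             (web server)
             (web uri))

;; The first command line argument is the directory holding the office file,
;; e.g. /tmp when invoked as "backend.scm /tmp".
(define office-file
  (string-append (cadr (command-line)) "/office"))

(define (write-office-state state)
  (call-with-output-file office-file
    (lambda (port)
      (display state port)
      (newline port))))

(define (handler request body)
  (if (string=? (uri-path (request-uri request)) "/whosin")
      (begin
        (write-office-state "empty")
        (values '((content-type . (text/plain)))
                "out of office\n"))
      (begin
        (write-office-state "on fire")
        (values '((content-type . (text/plain)))
                (string-append "unknown path "
                               (uri-path (request-uri request))
                               "\n")))))

;; Listen on every interface so the server is reachable from other
;; containers as well as from the host.
(run-server handler 'http (list #:port 7777 #:addr INADDR_ANY))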
We can now run the server in the background with Shepherd, to check its behavior as a native process inside a Shepherd service:
$ herd spawn transient -- `pwd`/backend.scm /tmp
$ herd status | grep transient-
+ transient-198
$ herd status transient-198
● Status of transient-198:
It is transient, running since 07:24:53 PM (4 minutes ago).
Main PID: 17653
Command: backend.scm /tmp
It is enabled.
Provides: transient-198
Requires: transient
Will not be respawned.
As you can see, the behavior is the same as when the script is run from the shell as a regular command:
$ curl localhost:7777/whosin && cat /tmp/office
out of office
empty
The frontend script
You can imagine this being the equivalent of your frontend or some kind of middleware service. Its behavior is: when someone sends an HTTP GET request for the /doorbell path at port 7778, the frontend calls the backend at http://localhost:7777/whosin, then reads the /tmp/office file contents and returns everything via HTTP to the client.
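Again, a minimal Guile sketch with this behavior might look as follows; it is only an illustration, with the call to http-get from (web client) and the response formatting being assumptions (only the port, the arguments and the backend state:/office state: output lines come from the transcripts in this post):
#!/usr/bin/env guile
!#
(use-modules (ice-9 receive)
             (ice-9 textual-ports)
             (web client)
             (web request)
             (web server)
             (web uri))

;; First argument: the directory shared with the backend; second argument:
;; the backend URL, e.g. "frontend.scm /tmp http://localhost:7777/whosin".
(define office-file
  (string-append (cadr (command-line)) "/office"))
(define backend-url (caddr (command-line)))

(define (handler request body)
  (if (string=? (uri-path (request-uri request)) "/doorbell")
      ;; Ask the backend who is in, then report the office file contents.
      (receive (response backend-body)
          (http-get (string->uri backend-url))
        (values '((content-type . (text/plain)))
                (string-append
                 "backend state: "
                 (string-trim-right backend-body #\newline)
                 "\noffice state: "
                 (call-with-input-file office-file get-string-all))))
      (values '((content-type . (text/plain)))
              "unknown path\n")))

;; Listen on every interface, on port 7778.
(run-server handler 'http (list #:port 7778 #:addr INADDR_ANY))
Let's spawn it with Shepherd and try: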
$ herd spawn transient -- `pwd`/frontend.scm /tmp http://localhost:7777/whosin
If we check now, we should have two different transient services, one for the backend and one for the frontend:
$ herd status | grep transient-
+ transient-198
+ transient-199
We can now test whether the code is sound:
$ curl localhost:7777/lallo && cat /tmp/office
unknown path /lallo
on fire
$ curl localhost:7778/doorbell
backend state: out of office
office state: empty
$ cat /tmp/office
empty
Now let's try running these scripts inside containers and see what changes between the native Shepherd services and the containerized ones.
A Home service example
Let's start by defining a Guile OCI image in our Guix Home configuration and gexps for the scripts:
(define guile-oci-image
  (oci-image
   (repository "guile")
   (value
    (specifications->manifest '("guile")))
   (pack-options
    '(#:symlinks (("/bin" -> "bin"))))))

(define (script-file script-name)
  (local-file (string-append (getenv "HOME")
                             ;; I happen to store these scripts in my HOME directory,
                             ;; you should replace this with the directory where you
                             ;; store your scripts.
                             "/" script-name ".scm")
              (string-append script-name ".scm")))

(define backend-script
  (script-file "backend"))
(define frontend-script
  (script-file "frontend"))
And add the following to your Home services:
(service home-oci-service-type
         (for-home
          (oci-configuration
           (runtime 'podman))))
(simple-service 'home-oci-provisioning
                home-oci-service-type
                (oci-extension
                 (networks
                  (list
                   (oci-network-configuration
                    (name "my-network"))))
                 (volumes
                  (list
                   (oci-volume-configuration
                    (name "my-volume"))))
                 (containers
                  (list
                   (oci-container-configuration
                    (provision "backend")
                    (image guile-oci-image)
                    (entrypoint "/bin/guile")
                    (network "my-network")
                    (command
                     '("-s" "/backend.scm" "/my-volume"))
                    (volumes
                     `(("my-volume" . "/my-volume")
                       (,backend-script . "/backend.scm"))))
                   (oci-container-configuration
                    (provision "frontend")
                    (image guile-oci-image)
                    (requirement '(backend))
                    (entrypoint "/bin/guile")
                    (network "my-network")
                    ;; Expose the frontend port to the host.
                    (ports
                     '(("7778" . "7778")))
                    (command
                     '("-s" "/frontend.scm" "/my-volume" "http://backend:7777/whosin"))
                    (volumes
                     `(("my-volume" . "/my-volume")
                       (,frontend-script . "/frontend.scm"))))))))
You can now run guix home reconfigure ... and, once Guix Home is done, check the services' status:
$ herd status backend
● Status of backend:
It is running since 11:21:38 PM (9 seconds ago).
Main PID: 1756
Command: /home/paul/.guix-home/profile/bin/podman run --rm --replace --name backend --entrypoint /bin/guile --network my-network -v my-volume:/my-volume -v /gnu/store/qiqwy2j7a598wp8v68294fpbjmmahrqc-backend.scm:/backend.scm localhost/guile:latest -s /backend.scm /my-volume
It is enabled.
Provides: backend
Requires: home-podman-networks home-podman-volumes
Custom action: command-line
Will not be respawned.
Recent messages (use '-n' to view more or less):
2025-03-09 23:21:40 Copying config sha256:d111b778901b847eca5043a590e41c432a0471b3d8c3b75fe00fb2ad15088d59
2025-03-09 23:21:40 Writing manifest to image destination
2025-03-09 23:21:40 Loading image for backend from /gnu/store/2ndlvrpblk171qixkspywrrm4z5fah5n-backend.tar.gz...
2025-03-09 23:21:40 Loaded image: localhost/guile.latest:latest
2025-03-09 23:21:40 Tagged /gnu/store/2ndlvrpblk171qixkspywrrm4z5fah5n-backend.tar.gz with localhost/guile:latest...
and the same for the frontend:
$ herd status frontend
● Status of frontend:
It is running since 11:21:38 PM (36 seconds ago).
Main PID: 1757
Command: /home/paul/.guix-home/profile/bin/podman run --rm --replace --name frontend --entrypoint /bin/guile --network my-network -p 7778:7778 -v my-volume:/my-volume -v /gnu/store/m1avaw2681bbljxcidy7vb4x0i3898db-frontend.scm:/frontend.scm localhost/guile:latest -s /frontend.scm /my-volume http://backend:7777/whosin
It is enabled.
Provides: frontend
Requires: home-podman-networks home-podman-volumes backend
Custom action: command-line
Will not be respawned.
Recent messages (use '-n' to view more or less):
2025-03-09 23:21:40 Writing manifest to image destination
2025-03-09 23:21:40 Untagged: localhost/guile.latest:latest
2025-03-09 23:21:40 Loading image for frontend from /gnu/store/ak3pzlin4ay98p14blxaj7zgrv8fh632-frontend.tar.gz...
2025-03-09 23:21:40 Loaded image: localhost/guile.latest:latest
2025-03-09 23:21:40 Tagged /gnu/store/ak3pzlin4ay98p14blxaj7zgrv8fh632-frontend.tar.gz with localhost/guile:latest...
Now let's test the functionality. We only exposed the frontend port, so we expect not to be able to connect to 7777:
$ curl localhost:7777/whosin
curl: (7) Failed to connect to localhost port 7777 after 0 ms: Couldn't connect to server
only to the frontend port (which is 7778):
$ curl localhost:7778/doorbell
backend state: out of office
office state: empty
We can check the office file contents with:
$ podman volume export my-volume | tar xv
office
$ cat office
empty
Now, let's have a closer look at the Shepherd services. Shepherd services provisioned by the oci-service-type support different sets of actions. The network provisioning service provides the following action:
$ herd doc home-podman-networks list-actions
command-line:
Prints home-podman-networks OCI runtime command line invocation.
$ herd command-line home-podman-networks
/home/paul/.guix-home/profile/bin/podman network create my-network
the same goes for the volumes provisioning service:
$ herd doc home-podman-volumes list-actions
command-line:
Prints home-podman-volumes OCI runtime command line invocation.
$ herd command-line home-podman-volumes
/home/paul/.guix-home/profile/bin/podman volume create my-volume
and for containers which reference a cached image, like in our case:
$ herd doc backend list-actions
command-line:
Prints backend OCI runtime command line invocation.
$ herd command-line backend
/home/paul/.guix-home/profile/bin/podman run --rm --replace --name backend --entrypoint /bin/guile --network my-network -v my-volume:/my-volume -v /gnu/store/qiqwy2j7a598wp8v68294fpbjmmahrqc-backend.scm:/backend.scm localhost/guile:latest -s /backend.scm /my-volume
OCI containers that have a remote image reference in their image field additionally support a pull action:
$ sudo herd doc podman-forgejo list-actions
command-line:
Prints podman-forgejo OCI runtime command line invocation.
pull:
Pull podman-forgejo image (codeberg.org/forgejo/forgejo:10.0.1-rootless).
$ sudo herd command-line podman-forgejo
/run/current-system/profile/bin/podman run --rm --replace --name podman-forgejo --env USER_UID=34595 --env USER_GID=98715 -p 3000:3000 -p 2202:22 -v forgejo:/var/lib/gitea -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro codeberg.org/forgejo/forgejo:10.0.1-rootless
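To actually refresh the image you would invoke that action directly on the service; presumably something along these lines (a hypothetical invocation, the action name comes from the listing above):
$ sudo herd pull podman-forgejo
The download then shows up in the service log: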
$ sudo tail -n 17 /var/log/forgejo.log
2025-03-09 23:50:11 Trying to pull codeberg.org/forgejo/forgejo:10.0.1-rootless...
2025-03-09 23:50:14 Getting image source signatures
2025-03-09 23:50:14 Copying blob sha256:15ab256cd4da0c6bf93a7cfa7e85e16dc9da5020c3415b630e59b09df76d27db
2025-03-09 23:50:14 Copying blob sha256:71d5a7a4eeb57275b451cfab8e904d9fa727a37f88fa3bc75942e1b4460acd44
2025-03-09 23:50:14 Copying blob sha256:66a3d608f3fa52124f8463e9467f170c784abd549e8216aa45c6960b00b4b79b
2025-03-09 23:50:14 Copying blob sha256:662951a9d959644cb6d446eed38ee5a42b231df7100397a93a5c2f22bf68712b
2025-03-09 23:50:14 Copying blob sha256:9cace756fe1f230966ec1022c17c33168f14c9e067e559c97950fdbfa2bba40b
2025-03-09 23:50:14 Copying blob sha256:d962a541e1a1144a3c08f44260b04c7e37cdd99db572c8ae6a8f54cd8f9eafbe
2025-03-09 23:50:14 Copying blob sha256:7dc8ff21196384b18f484da3eb1bde3064c7b506de71fdf45c6a12e7425d3bb9
2025-03-09 23:50:14 Copying blob sha256:82cdec355329fbe7dbbfeee91deca478d805ea4a20a07bf4f486beeb6ec9c342
2025-03-09 23:50:14 Copying blob sha256:c768a4796c3cea8130cd679fcdebbe3ec0c2d7399bc5e85758e20efb4eb834b6
2025-03-09 23:50:14 Copying blob sha256:43402951a99e9088f7eb19a737a806b81906305e15cd5bb0ecf1a1fa816da5f9
2025-03-09 23:50:14 Copying blob sha256:003e5af9ef5613ad37b723e8bf9fdbf80c2a656c9a61741941797ebcc1891cc3
2025-03-09 23:50:14 Copying blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
2025-03-09 23:50:14 Copying config sha256:55f1bcd32c34de7e1544e5dfac45208cce1a8bb298858ff61978ff858c7c5f6b
2025-03-09 23:50:14 Writing manifest to image destination
2025-03-09 23:50:14 55f1bcd32c34de7e1544e5dfac45208cce1a8bb298858ff61978ff858c7c5f6b
oci-container-service-type vs oci-service-type
The new oci-service-type deprecates the oci-container-service-type: it is completely backward compatible and, while deprecated, the oci-container-service-type is now actually implemented by extending the oci-service-type. The new service type brings additional features, such as rootless Podman support, the ability to provision networks and volumes, and better image caching.
To make the switch in service code you need to change your extension from
(service-extension oci-container-service-type
                   oci-bonfire-configuration->oci-container-configuration)
to
(service-extension oci-service-type
                   (lambda (config)
                     (oci-extension
                      (containers
                       (list
                        (oci-bonfire-configuration->oci-container-configuration config))))))
To make the switch in operating-system records, you need to change from
(simple-service 'oci-containers
                oci-container-service-type
                (list
                 (oci-container-configuration
                  ...)))
to
(simple-service 'oci-containers
                oci-service-type
                (oci-extension
                 (containers
                  (list
                   (oci-container-configuration
                    ...)))))