Error cgroup using docker plugin

Hi

I’m using Drone on a Debian testing machine, and for a few days I have been encountering the following error:

---> Running in 735f5efc0fd0
cgroups: cgroup mountpoint does not exist: unknown
time="2021-03-15T12:50:25Z" level=fatal msg="exit status 1"

I think Docker was updated on my server.
The pipeline was working perfectly last month.

I tried to run docker build manually with the same Dockerfile as in the pipeline, and it worked fine.
I think I missed something, but I don’t know what.
Can you help me or give me some clues for investigation?

Here is the docker plugin pipeline step:

- name: dockerise
  image: plugins/docker
  network_mode: host
  settings:
    registry: x.x.x.x:yyyy
    repo: x.x.x.x:yyyy/myrepo
    debug: false
    insecure: true
    force_tag: true
    build_args:
      - SERVER_PORT=pppp
    username:
      from_secret: registry_username
    password:
      from_secret: registry_password
    tags:
      - latest

I understand the problem comes from cgroup v2, but I don’t see a way to make it work with the Docker plugin.
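For anyone else hitting this, a quick generic check (not specific to Drone) of which cgroup hierarchy the host actually exposes is to look at the filesystem type mounted at /sys/fs/cgroup:

```shell
# Report the filesystem mounted at /sys/fs/cgroup:
#   cgroup2fs -> unified cgroup v2 hierarchy (the case that trips up older dind images)
#   tmpfs     -> hybrid/legacy cgroup v1 hierarchy
stat -fc %T /sys/fs/cgroup/
```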

Thank you for your support :wink:

Tang

Here are the complete debug logs of the Docker plugin:

+ /usr/local/bin/********d --data-root /var/lib/******** --host=unix:///var/run/********.sock --insecure-registry 192.168.1.1:8662
time="2021-03-15T20:21:49.608150778Z" level=info msg="Starting up"
time="2021-03-15T20:21:49.613709821Z" level=warning msg="could not change group /var/run/********.sock to ********: group ******** not found"
time="2021-03-15T20:21:49.624955531Z" level=info msg="libcontainerd: started new containerd process" pid=33
time="2021-03-15T20:21:49.626041588Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-03-15T20:21:49.626062262Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-03-15T20:21:49.626090720Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/********/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
time="2021-03-15T20:21:49.626132344Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-03-15T20:21:49.822230776Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
time="2021-03-15T20:21:49.823334053Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
time="2021-03-15T20:21:49.823456498Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
time="2021-03-15T20:21:49.823650819Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/********/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2021-03-15T20:21:49.823678436Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
time="2021-03-15T20:21:49.837094319Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
time="2021-03-15T20:21:49.837131525Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
time="2021-03-15T20:21:49.837411263Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
time="2021-03-15T20:21:49.837750718Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
time="2021-03-15T20:21:49.838670993Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
time="2021-03-15T20:21:49.838690085Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
time="2021-03-15T20:21:49.838901383Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/********/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
time="2021-03-15T20:21:49.838917295Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/********/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2021-03-15T20:21:49.838925586Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
time="2021-03-15T20:21:49.846164076Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
time="2021-03-15T20:21:49.846249602Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
time="2021-03-15T20:21:49.846617320Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846685625Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846722898Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846783235Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846834848Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846874950Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846915737Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.846989526Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
time="2021-03-15T20:21:49.847318518Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
time="2021-03-15T20:21:49.847526940Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
time="2021-03-15T20:21:49.849802406Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
time="2021-03-15T20:21:49.849854159Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
time="2021-03-15T20:21:49.849904618Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.849924395Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.849938776Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.849955079Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.849968410Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.849983500Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.849997400Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.850015213Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.850030942Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
time="2021-03-15T20:21:49.850755772Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.850782743Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.850801831Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.850819027Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
time="2021-03-15T20:21:49.851613250Z" level=info msg=serving... address="/var/run/********/containerd/containerd-debug.sock"
time="2021-03-15T20:21:49.851697002Z" level=info msg=serving... address="/var/run/********/containerd/containerd.sock"
time="2021-03-15T20:21:49.851718775Z" level=info msg="containerd successfully booted in 0.030195s"
time="2021-03-15T20:21:49.874123723Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-03-15T20:21:49.875506383Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-03-15T20:21:49.875769656Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/********/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
time="2021-03-15T20:21:49.875997391Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-03-15T20:21:49.877226560Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2021-03-15T20:21:49.877256553Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2021-03-15T20:21:49.877282252Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/********/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
time="2021-03-15T20:21:49.877311998Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2021-03-15T20:21:49.934587730Z" level=warning msg="Your kernel does not support swap memory limit"
time="2021-03-15T20:21:49.934619060Z" level=warning msg="Your kernel does not support memory reservation"
time="2021-03-15T20:21:49.934628763Z" level=warning msg="Your kernel does not support oom control"
time="2021-03-15T20:21:49.934699952Z" level=warning msg="Your kernel does not support memory swappiness"
time="2021-03-15T20:21:49.934760216Z" level=warning msg="Your kernel does not support kernel memory limit"
time="2021-03-15T20:21:49.934801789Z" level=warning msg="Your kernel does not support kernel memory TCP limit"
time="2021-03-15T20:21:49.934838796Z" level=warning msg="Your kernel does not support cgroup cpu shares"
time="2021-03-15T20:21:49.934955025Z" level=warning msg="Your kernel does not support cgroup cfs period"
time="2021-03-15T20:21:49.935200152Z" level=warning msg="Your kernel does not support cgroup cfs quotas"
time="2021-03-15T20:21:49.935216299Z" level=warning msg="Your kernel does not support cgroup rt period"
time="2021-03-15T20:21:49.935233125Z" level=warning msg="Your kernel does not support cgroup rt runtime"
time="2021-03-15T20:21:49.935262064Z" level=warning msg="Unable to find blkio cgroup in mounts"
time="2021-03-15T20:21:49.936868719Z" level=info msg="Loading containers: start."
time="2021-03-15T20:21:50.287519386Z" level=info msg="Default bridge (********0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2021-03-15T20:21:50.479218833Z" level=info msg="Loading containers: done."
time="2021-03-15T20:21:50.573741435Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
time="2021-03-15T20:21:50.573990954Z" level=info msg="Daemon has completed initialization"
time="2021-03-15T20:21:50.632089638Z" level=info msg="API listen on /var/run/********.sock"
Detected registry credentials
+ /usr/local/bin/******** version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:22:56 2020
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:30:32 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
********-init:
Version: 0.18.0
GitCommit: fec3683
+ /usr/local/bin/******** info
Client:
Debug Mode: false

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.8
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: ********-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.0-4-amd64
Operating System: Alpine Linux v3.11
OSType: linux
Architecture: x86_64
CPUs: 2
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
Total Memory: 1.785GiB
Name: tangserver
ID: GH7K:4HII:6QQV:YVRM:XQB2:RSTA:X6O2:55FO:LVHM:2CIO:K7HY:5M6O
Docker Root Dir: /var/lib/********
Debug Mode: false
Registry: https://index.********.io/v1/
Labels:
Experimental: false
Insecure Registries:
192.168.1.1:8662
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

+ /usr/local/bin/******** build --rm=true -f Dockerfile -t 24e7c898dcd0eff46c3c892ee8cc0dd85163443f . --pull=true --build-arg SERVER_PORT=9101 --label org.opencontainers.image.created=2021-03-15T20:21:50Z --label org.opencontainers.image.revision=24e7c898dcd0eff46c3c892ee8cc0dd85163443f --label org.opencontainers.image.source=https://github.com/tangb/mg2l-backend.git --label org.opencontainers.image.url=https://github.com/tangb/mg2l-backend
Sending build context to Docker daemon 299MB
Step 1/16 : FROM node:alpine as BASE
alpine: Pulling from library/node
e95f33c60a64: Pulling fs layer
bbf10ae0e36d: Pulling fs layer
653df6d07dfe: Pulling fs layer
c4e710be8028: Pulling fs layer
c4e710be8028: Waiting
e95f33c60a64: Verifying Checksum
e95f33c60a64: Download complete
e95f33c60a64: Pull complete
653df6d07dfe: Verifying Checksum
653df6d07dfe: Download complete
c4e710be8028: Verifying Checksum
c4e710be8028: Download complete
bbf10ae0e36d: Verifying Checksum
bbf10ae0e36d: Download complete
bbf10ae0e36d: Pull complete
653df6d07dfe: Pull complete
c4e710be8028: Pull complete
Digest: sha256:1aa4d551d84797a2df6261e4b6a78b849f6bd11b0dc94b7f2ddf0023fecd8261
Status: Downloaded newer image for node:alpine
---> 8bf655e9f9b2
Step 2/16 : WORKDIR /app
---> Running in aa986e037bb5
Removing intermediate container aa986e037bb5
---> 7b40bf7637e4
Step 3/16 : COPY dist/. /app/
---> e495ab0db628
Step 4/16 : COPY package*.json /app/
---> 553e6b1c5320
Step 5/16 : COPY preproduction.env /app/
---> 31782c185698
Step 6/16 : ARG SERVER_PORT
---> Running in bc771cb9178c
time="2021-03-15T20:22:47.133689595Z" level=info msg="Layer sha256:f628ee754501eed4b6bc45c7d36dbe8089d4c94811c6fe3c5f033bc03d907b09 cleaned up"
Removing intermediate container bc771cb9178c
---> fe36a0ee8e4c
Step 7/16 : ENV NODE_ENV=preproduction
---> Running in f4e99004a27d
time="2021-03-15T20:22:47.235307532Z" level=info msg="Layer sha256:f628ee754501eed4b6bc45c7d36dbe8089d4c94811c6fe3c5f033bc03d907b09 cleaned up"
Removing intermediate container f4e99004a27d
---> d5e63a790303
Step 8/16 : RUN echo "SERVER_PORT ${SERVER_PORT}"
---> Running in 9aa6f879fefa
time="2021-03-15T20:22:47.411894887Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9aa6f879fefa8e32108f313ae421abd88da845c775496ebba9e0e88d97af9d46/shim.sock" debug=false pid=336
time="2021-03-15T20:22:47.734448584Z" level=info msg="shim reaped" id=9aa6f879fefa8e32108f313ae421abd88da845c775496ebba9e0e88d97af9d46
time="2021-03-15T20:22:47.852950535Z" level=error msg="9aa6f879fefa8e32108f313ae421abd88da845c775496ebba9e0e88d97af9d46 cleanup: failed to delete container from containerd: no such container"
cgroups: cgroup mountpoint does not exist: unknown
time="2021-03-15T20:22:48Z" level=fatal msg="exit status 1"

And here is the result of the docker info command on the host:

     Client:
     Context:    default
     Debug Mode: false
     Plugins:
      app: Docker App (Docker Inc., v0.9.1-beta3)
      buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

    Server:
     Containers: 6
      Running: 6
      Paused: 0
      Stopped: 0
     Images: 23
     Server Version: 20.10.5
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true
     Logging Driver: json-file
     Cgroup Driver: systemd
     Cgroup Version: 2
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
     Default Runtime: runc
     Init Binary: docker-init
     containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
     runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
     init version: de40ad0
     Security Options:
      apparmor
      seccomp
       Profile: default
      cgroupns
     Kernel Version: 5.10.0-4-amd64
     Operating System: Debian GNU/Linux bullseye/sid
     OSType: linux
     Architecture: x86_64
     CPUs: 2
     Total Memory: 1.785GiB
     Name: tangserver
     ID: XVU2:EJ6V:VCC2:L6ZM:FJMU:KFXJ:HPB5:YPLR:NBQR:QNGK:3ZDJ:2BBQ
     Docker Root Dir: /home/docker
     Debug Mode: false
     Registry: https://index.docker.io/v1/
     Labels:
     Experimental: false
     Insecure Registries:
      192.168.1.2:5000
      192.168.1.1:8662
      127.0.0.0/8
     Live Restore Enabled: false

    WARNING: Support for cgroup v2 is experimental

Is there some reason to believe the problem comes from the difference between the host Docker version (20.10.5) and the one used by the Drone Docker plugin (19.03.8)?

Or maybe Debian now enables cgroup v2 by default, and the Drone Docker plugin is not yet compatible with that host configuration?
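One way to confirm the host side of that hypothesis is to ask the daemon directly: on Docker 20.10+, docker info exposes the cgroup version as a Go-template field (the guard below is only there so the snippet degrades gracefully on machines without the docker CLI):

```shell
# "2" on a cgroup v2 host (as docker info above already shows for this Debian machine),
# "1" on a legacy cgroup v1 host; the field exists on Docker 20.10+.
if command -v docker >/dev/null 2>&1; then
  docker info --format '{{.CgroupVersion}}'
else
  echo "docker CLI not installed"
fi
```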

I’m really confused by this problem and I don’t know what to do.
Any help would be greatly appreciated :sweat_smile:

Thanks again

The Docker plugin is a small wrapper around the official docker:dind image [1]. I would therefore recommend running the official docker:dind container on your host, collecting the error message from its logs, and then reporting it to the Docker team (see sample commands below). The Docker maintainers will be the most qualified to help you troubleshoot problems with docker-in-docker.

$ docker run --privileged --name dind -d docker:19.03.8-dind
$ sleep 15
$ docker logs dind
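If you need the runner working again before an upstream fix lands, one workaround I have seen suggested (a host-level change, sketched here assuming a systemd + GRUB setup; adjust for your bootloader, and weigh the trade-offs first) is to boot the host back into the legacy cgroup v1 hierarchy:

```shell
# Workaround sketch: switch the host back to cgroup v1 (systemd + GRUB assumed).
# 1. In /etc/default/grub, append to GRUB_CMDLINE_LINUX:
#      systemd.unified_cgroup_hierarchy=0
# 2. Regenerate the GRUB config and reboot:
#      sudo update-grub && sudo reboot
# 3. After rebooting, verify which hierarchy is mounted:
stat -fc %T /sys/fs/cgroup/   # "tmpfs" indicates the legacy v1 layout, "cgroup2fs" the unified v2 one
```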

[1] drone-docker/Dockerfile.linux.amd64 at master · drone-plugins/drone-docker · GitHub

Thank you very much for your answer.

I finally dropped the Docker plugin entirely and switched to the DooD method.
As my repo is completely private, I should not run into security issues :wink:
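For reference, a DooD step in Drone looks roughly like the sketch below (the step name, image tag, and registry address are placeholders echoing the original step, not my exact config). The key point is mounting the host’s /var/run/docker.sock so builds run against the host daemon, which already copes with the host’s cgroup setup. Note that host volumes require the repository to be marked as trusted in Drone.

```yaml
- name: dockerise
  image: docker:20.10
  volumes:
    - name: dockersock
      path: /var/run/docker.sock
  commands:
    - docker build -t x.x.x.x:yyyy/myrepo:latest --build-arg SERVER_PORT=pppp .
    - docker push x.x.x.x:yyyy/myrepo:latest

volumes:
  - name: dockersock
    host:
      path: /var/run/docker.sock
```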

Just a question about the Docker plugin: is there a reason to pin the docker:dind image to 19.03 instead of using the latest one (which changed to 20.10.5 a few days ago)?

Thank you again

I haven’t looked at the latest release; however, 19.03 is the last dind image that includes 32-bit ARM support. Upgrading dind to 20.x would break 32-bit ARM pipelines. There is an open issue in the Docker repository, and they have a fix planned, which is what we are waiting for before upgrading.


Sorry, I have the same problem too. So… is it still not solved?

Same problem here.

Is there any documentation or are there examples of alternative ways to build and push the Docker image?

Documentation on how to build the Docker image: GitHub - drone-plugins/drone-docker: Drone plugin for publishing Docker images using Docker-in-Docker

The active ticket where this Drone issue is being worked on, including a link to an alternate image someone is hosting (use at your own risk): Update Docker dind to 20.10.7-dind by prologic · Pull Request #327 · drone-plugins/drone-docker · GitHub