Drone

Build Docker image and re-use in the next step - Kubernetes

Hi @bradrydzewski, @ashwilliams1,

Is there any way to build a Docker image in Kubernetes and pass it to the next step?
I saw the FAQ article "Build Docker image and re-use in the next step", but it is for `type: docker`.
Can this be done for Kubernetes as well, without mounting the host volume `/var/run/docker.sock` or the Docker lib directory?
Currently I push the image to ECR and the next step downloads it, but this consumes a lot of the build time.
Also, which cache can I use to reduce build time for images?

Docker images are stored in the host machine's Docker cache. I am not aware of any generic way to build a Docker image in one pod / container, and then use the same Docker image in another pod / container, without either a) mounting the host machine's Docker socket or b) pushing the image to a registry so that it can be downloaded and re-used. I am not aware of any Kubernetes-native CI systems that are able to solve for this use case, without using the methods previously described, due to the limitations of Kubernetes itself.
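For reference, option (a) can be sketched with the Kubernetes runner by mounting the node's Docker socket as a host-path volume. This is only a sketch: the image names, tags, and commands are illustrative, and host-path volumes typically require the repository to be marked as trusted. It also shares the host daemon with the pipeline, with all the security implications that implies.

```yaml
kind: pipeline
type: kubernetes
name: host-socket-example

steps:
- name: build
  image: docker:20
  volumes:
  - name: dockersock
    path: /var/run/docker.sock
  commands:
  # the image lands in the host daemon's cache ...
  - docker build -t myorg/myapp:${DRONE_COMMIT_SHA} .

- name: test
  image: docker:20
  volumes:
  - name: dockersock
    path: /var/run/docker.sock
  commands:
  # ... so a later step on the same node can run it without pulling
  - docker run --rm myorg/myapp:${DRONE_COMMIT_SHA} ./run-tests.sh

volumes:
- name: dockersock
  host:
    path: /var/run/docker.sock
```

Note this only works when both steps are scheduled on the same node, which is the case here because all steps of a Kubernetes pipeline run in a single Pod.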

Understood. Is it possible for a pod to read from

volumes:
- name: dockersock
  temp: {}

when it is mounted to a step, together with `pull: never`? I tried that but it fails.
This would increase the performance of the builds.

hmm, not sure I fully understand. The docker images are only cached on the host node, so creating a temporary volume mount would not have any impact.

What about using `cache_from` to make your Docker builds run faster?
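As a sketch, with the Docker plugin this looks roughly like the following (the registry and repo names are placeholders; the same `cache_from` setting is available in the ECR plugin):

```yaml
steps:
- name: build
  image: plugins/docker
  settings:
    repo: registry.example.com/myorg/myapp
    tags:
    - ${DRONE_BRANCH}
    # pull the previously pushed image and reuse its layers as a build cache
    cache_from:
    - registry.example.com/myorg/myapp:${DRONE_BRANCH}
```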

I am using `cache_from`, but I want to save the image pull time in each step, and I do not want to push an image to the registry unless it passes all unit tests.
What about creating a DaemonSet running Docker-in-Docker on each node, so Drone talks to that service instead of running a Docker service in the pipeline with `temp: {}`?

Does Drone support BuildKit? If I have BuildKit pods, can Drone communicate with them?

when you are using the Kubernetes runner, Drone creates a standard Kubernetes Pod, where each step in your pipeline is a Container in the Pod. So in this case, Drone is only communicating with Kubernetes using its API to create the Pod and return the container logs and exit codes. Everything else is being controlled by Kubernetes.

As an example, Kubernetes is responsible for pulling images, not Drone (Drone creates the Pod, but Kubernetes decides if and when to pull the image). In terms of support for BuildKit, I presume one could create a custom plugin that supports BuildKit. People have created plugins for alternate build engines, such as the kaniko plugin.
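A kaniko-based step builds and pushes an image entirely in userspace, with no Docker daemon involved. A rough sketch using one community kaniko plugin follows; the exact plugin image and its settings vary, so treat these names as assumptions and check the plugin's own documentation:

```yaml
steps:
- name: build-with-kaniko
  image: banzaicloud/drone-kaniko
  settings:
    registry: registry.example.com   # placeholder registry
    repo: myorg/myapp                # placeholder repository
    tags: ${DRONE_COMMIT_SHA}
    dockerfile: ./Dockerfile
```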

I see that Codefresh supports a build layer cache. Can Drone also support such an architecture?

You could use a third-party tool like Makisu, an image builder with a distributed layer cache that Uber created specifically for its continuous integration system.

You could use the buildx command directly (as you mentioned) with a distributed cache backend. You would need to mount the host machine's Docker socket or run docker-in-docker as shown below. You may also be able to use the buildx kubernetes driver.

---
kind: pipeline
type: kubernetes
name: default

steps:
- name: test
  image: docker:dind
  volumes:
  - name: dockersock
    path: /var/run
  commands:
  - sleep 5 # give docker enough time to start
  - docker ps -a
  - docker buildx build -t ...
  - docker run ...
  - docker push ...

services:
- name: docker
  image: docker:dind
  privileged: true
  volumes:
  - name: dockersock
    path: /var/run

volumes:
- name: dockersock
  temp: {}

Furthermore, the docker-in-docker layers are stored in a directory and can therefore be cached and restored using a cache plugin. The S3 cache plugin, for example, can cache and restore artifacts using S3 buckets. If you search the plugin index for the term cache you can find a few different plugin options.
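One possible shape for the restore/rebuild pattern, using an S3-backed cache plugin, is sketched below. The plugin image, its settings, the bucket name, and the cached directory are all assumptions here; search the plugin index for "cache" and consult the chosen plugin's documentation for the real settings.

```yaml
steps:
- name: restore-cache
  image: meltwater/drone-cache
  settings:
    restore: true
    bucket: my-ci-cache        # hypothetical S3 bucket
    region: us-west-1
    mount:
    - docker-layers            # directory restored into the workspace

# ... build steps that read and write ./docker-layers ...

- name: rebuild-cache
  image: meltwater/drone-cache
  settings:
    rebuild: true
    bucket: my-ci-cache
    region: us-west-1
    mount:
    - docker-layers
```

The restore step runs first and pulls the cached directory down from S3; the rebuild step runs last and uploads the updated directory for the next build.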

Lastly I would point out that the Codefresh cache only helps cache Docker layers. It does not solve the problem we have been discussing, where you want to build an image and re-use it in the next step. For this to work, we need to interact directly with the daemon by mounting the host machine socket. Or alternatively you need to use docker-in-docker and interact directly with the daemon to build images and run containers.

This is my drone.yml. Building the container and pushing it to the registry costs time and storage,
and requires me to push a dirty image to ECR if the tests fail.

---
kind: pipeline
name: default
type: kubernetes

platform:
  os: linux
  arch: amd64

steps:
- name: publish-to-ecr-branch-name-branch-build
  pull: if-not-exists
  image: plugins/ecr
  settings:
    build_args:
    - SHADOW_CONF=/shadow_collector/config/vpc_config.json
    build_args_from_env:
    - PIP_EXTRA_INDEX_URL
    dockerfile: ./Dockerfile
    region: us-west-1
    registry: xxxxxx.dkr.ecr.us-west-1.amazonaws.com
    repo: xxxxx.dkr.ecr.us-west-1.amazonaws.com/xxxx/xxxx
    tags:
      - ${DRONE_BRANCH}
      - ${DRONE_BRANCH}-${DRONE_COMMIT_AUTHOR}
    cache_from:
      - xxxxx.dkr.ecr.us-west-1.amazonaws.com/xxxxx/xxxxx:${DRONE_BRANCH}
  environment:
    PIP_EXTRA_INDEX_URL:
      from_secret: pip_extra_index_url
  when:
    event:
    - push

- name: db-migrations
  pull: always
  image: xxxxxxxx.dkr.ecr.us-west-1.amazonaws.com/xxxxx/xxxxxxx:${DRONE_BRANCH}-${DRONE_COMMIT_AUTHOR}
  commands:
  - "curl -X POST 'http://127.0.0.1:8085/api/v1/shadow-it/tenant' -H  'admin-id: local-admin' -H 'Content-Type: application/json' --data '{\"name\": \"test_tenant\", \"email\": \"test_tenant@xxxxxx.com\", \"expiration\": \"2027-07-07T14:58:24.070Z\"}'"
  - "curl -X POST 'http://127.0.0.1:8041/api/v1/shadow-it/admin/create_db' -H 'Content-Type: application/json' --data '{\"tenant_id\":\"test_tenant\"}'"

- name: api-run
  pull: if-not-exists
  image: xxxxxx.dkr.ecr.us-west-1.amazonaws.com/xxxxxxx/xxxxxxr:${DRONE_BRANCH}-${DRONE_COMMIT_AUTHOR}
  detach: true
  commands:
  - uwsgi --ini ./ops/etc/uwsgi.ini --http :8040 --workers 1
  environment:
    SHADOW_CONF: ./config/config.json

@ashwilliams1,
Can you please give an example of how to use the docker service with the S3 cache plugin? I am not able to set it up. I want to run a docker service for my pipeline so that the build uses the cache from S3. How can I do that?

I do not have an example off the top of my head, however, it sounds like you were previously using the docker runner and are trying to upgrade to kubernetes. If the docker runner was working well, have you considered reverting back to using it?

I am no kubernetes expert, but you should be able to set up the docker runner on Kubernetes and use docker-in-docker (example below) to avoid exposing the host node's docker socket.

    spec:
      containers:
      - name: dind
        image: docker:dind
        args:
          - dockerd
          - --storage-driver=overlay2
          - --host=unix:///var/run/docker.sock
        securityContext:
          privileged: true
        volumeMounts:
          - name: dockersock
            mountPath: /var/run
      - name: runner
        image: drone/drone-runner-docker:1
        env:
          - name: DRONE_RPC_HOST
            value: drone.company.com
          - name: DRONE_RPC_PROTO
            value: https
          - name: DRONE_RPC_SECRET
            value: password
        volumeMounts:
          - name: dockersock
            mountPath: /var/run
      volumes:
        - name: dockersock
          emptyDir: {}

What setup were you using previously? Perhaps this was the optimal setup for your organization and switching to the new kubernetes runner does not make sense?

@ashwilliams1 I am able to run this configuration on Kubernetes, but I am seeing that the docker runner is not stable: 2 builds succeed and 1 fails for the same build, and performance is very slow. Is there any configuration or env vars I can pass to make it stronger and more reliable? The pipeline builds an image and runs integration tests, and it is taking a lot of time and is not stable.

When I look at dind I get the error "cannot allocate memory event=oom".
I set the following vars:

DRONE_RUNNER_CAPACITY: 3
DRONE_DOCKER_STREAM_PULL: false
DRONE_LIMIT_MEM_SWAP: "4096000000"
DRONE_LIMIT_MEM: "512000000"
DRONE_CPU_SHARES: 4096

It is working, but VERY slow, and only 3 builds at a time.
container spec:

resources:
  limits:
    cpu: 2024m
    memory: 2024Mi
  requests:
    cpu: 1024m
    memory: 1024Mi

dind:

resources:
  limits:
    cpu: 3048m
    memory: 6000Mi
  requests:
    cpu: 1024m
    memory: 1024Mi

@ihakimi what configuration were you using before you tried migrating to the kubernetes runner? My understanding is you previously had something that was working quite well, and that is the configuration I am suggesting reverting back to. It would be helpful for us to see that previous configuration. Can you provide the spec of your old setup?

When I look on dind I got error “cannot allocate memory event=oom”

the pipeline is executed inside the dind container. If you are receiving this error message it would imply the dind container does not have enough memory available.

its working but VERY slow

I noticed you have allocated a very small amount of cpu and ram to your dind container. Since your pipelines execute inside the dind container, it really needs more memory and cpu to be effective. Right now it looks like you have the runner configured to execute 3 concurrent pipelines, which are all fighting over 1024Mi of shared ram, which is not a lot. The runner container should require much less cpu and memory, and the dind container should be allocated much more: enough to process N pipelines concurrently, where N is the value of DRONE_RUNNER_CAPACITY.

and only 3 build a time

The docker runner is meant to be installed on a traditional vm, where you install one runner per vm. When people use the docker runner on kubernetes, they emulate this by installing multiple runners using replicas.

The DRONE_RUNNER_CAPACITY value configures the number of pipelines a single runner can execute concurrently. I recommend setting this value to 1 so that each runner can execute 1 pipeline at a time. Then, to increase concurrency, increase the number of replicas.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: runner
spec:
  replicas: 5
  selector:
    matchLabels:
      app: runner
  template:
    metadata:
      labels:
        app: runner
    spec:
      containers:
      - name: dind
        image: docker:dind
        args:
          - dockerd
          - --storage-driver=overlay2
          - --host=unix:///var/run/docker.sock
        securityContext:
          privileged: true
        volumeMounts:
          - name: dockersock
            mountPath: /var/run
        resources:
          limits:
            cpu: 2024m
            memory: 2024Mi
      - name: runner
        image: drone/drone-runner-docker:1
        env:
          - name: DRONE_RPC_HOST
            value: drone.company.com
          - name: DRONE_RPC_PROTO
            value: https
          - name: DRONE_RPC_SECRET
            value: password
          - name: DRONE_RUNNER_CAPACITY
            value: "1"
        volumeMounts:
          - name: dockersock
            mountPath: /var/run
      volumes:
        - name: dockersock
          emptyDir: {}