Contributing to Drone for Kubernetes


This guide is a work in progress. I will continue to update it based on comments and questions from the community. Thanks for reading!

The Drone Kubernetes Runtime takes a .drone.yml configuration and translates it into Kubernetes native pods, secrets, and services. So what does this mean if you are already running Drone on Kubernetes today? It means no more build agents. No more mounting the host machine Docker socket or running Docker in Docker. If you are into buzzwords, it means Drone is now Kubernetes Native.

The goal of this thread is to help you get up and running with the Drone development tools, so that you can test and contribute to the Drone Kubernetes Runtime. If you want to contribute, you should use the command line tools described below for testing. Compiling the full Drone server for testing purposes is not necessary, and quite frankly, will only make your life more difficult.

Source Code

The Drone Kubernetes Runtime can be found here. You can clone the repository to your gopath and build using the instructions found here. The kubernetes implementation can be found in this package. If you want to contribute to the Kubernetes Runtime, this is the only source code you should need to edit.

Creating a Pipeline

The Kubernetes Runtime takes a Pipeline json file as its input format. The json file is an intermediate representation. You can construct the json file by hand, but it is easier to create a yaml file and use the drone-yaml tool to compile the yaml to the intermediate representation.

Here is an example yaml:

kind: pipeline
name: default

steps:
- name: greeting
  image: alpine
  commands:
  - echo hello
  - echo world

Install the drone-yaml binary:

go get github.com/drone/drone-yaml

Once the binary is installed, compile the yaml:

drone-yaml compile > .drone.json

Executing the Pipeline

In the previous section we compiled the yaml configuration to an intermediate json representation. Now we can use the drone-runtime tool to execute the Pipeline. Install the drone-runtime binary to get started.

go get github.com/drone/drone-runtime

Once the binary is installed, execute the Pipeline with the following command:

drone-runtime \
  --kube-node=$NODE \
  --kube-config=$HOME/.kube/config .drone.json

Note that we need to provide the path to our Kubernetes configuration file and the name of the node on which the Pipeline should run. The node is required for testing purposes only; once we support persistent volume claims (issue #19), it will no longer be required.
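If you are not sure which node name to pass, you can look one up with kubectl. This sketch simply grabs the first node in the cluster; it assumes kubectl is installed and pointed at your cluster, and the `|| true` lets the lookup degrade gracefully if it fails:

```shell
# Select the first node in the cluster to pass as --kube-node.
# Assumes kubectl is installed and configured; `|| true` keeps the
# snippet from aborting a strict shell session if the lookup fails.
NODE=$(kubectl get nodes -o "jsonpath={.items[0].metadata.name}" || true)
echo "pipeline node: $NODE"
```

You can then run the drone-runtime command above with `$NODE`.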

Testing Volumes

Every Drone pipeline has a shared volume, called the workspace, where your code is cloned. This ensures all steps have access to your source code, as well as any files or artifacts that are created. You can test this capability with the following yaml (compile the yaml and run).

kind: pipeline
name: default

steps:
- name: foo
  image: alpine
  commands:
  - touch hello.txt

- name: bar
  image: alpine
  commands:
  - ls hello.txt
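Conceptually, the workspace is just a directory shared by every step. Here is a rough local analogy of the pipeline above, using a temp directory as a stand-in for the workspace volume (the path is illustrative):

```shell
# Stand-in for the shared workspace volume (path is illustrative).
WORKSPACE=$(mktemp -d)

# Step "foo" creates a file in the workspace.
( cd "$WORKSPACE" && touch hello.txt )

# Step "bar" sees the file created by the previous step.
( cd "$WORKSPACE" && ls hello.txt )
```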

Testing Services

If you define a services section in your yaml configuration, it is mapped to a Kubernetes service. You can test this capability with the following yaml (compile the yaml and run).

kind: pipeline
name: default

steps:
- name: test
  image: redis
  commands:
  - sleep 5
  - redis-cli -h $REDIS_SERVICE_HOST ping

services:
- name: redis
  image: redis
  ports:
  - 6379
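The `$REDIS_SERVICE_HOST` variable in the test step comes from Kubernetes itself: for each active service, Kubernetes injects `<NAME>_SERVICE_HOST` and `<NAME>_SERVICE_PORT` environment variables into containers. A minimal illustration, with made-up values:

```shell
# Kubernetes injects <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT for each
# service; the values below are made up for illustration.
export REDIS_SERVICE_HOST=10.0.0.12
export REDIS_SERVICE_PORT=6379
echo "redis reachable at $REDIS_SERVICE_HOST:$REDIS_SERVICE_PORT"
```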

Testing Secrets

Secrets are mapped to Kubernetes secrets. You can test this capability with the yaml configuration file below. Note that you need to pass your secrets to the compile step:

drone-yaml compile \
  --secret=username:janecitizen \
  --secret=password:correct-horse-battery-staple > .drone.json

kind: pipeline
name: default

steps:
- name: test
  image: alpine
  environment:
    # environment variable names are illustrative
    PASSWORD:
      from_secret: password
    USERNAME:
      from_secret: username
  commands:
  - env
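At runtime, the secrets surface inside the step as ordinary environment variables, which is why a bare `env` command is enough to verify them. A local simulation (variable names and values are illustrative only):

```shell
# Simulate the injected secrets; names and values are illustrative only.
export USERNAME=janecitizen
export PASSWORD=correct-horse-battery-staple

# The env step in the pipeline prints lines like these:
env | grep -E '^(USERNAME|PASSWORD)='
```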


You can use kubectl and the Kubernetes dashboard to debug a running pipeline. We also provide a simple utility that converts the intermediate json representation to a Kubernetes yaml file. This can be useful when you want a better understanding of how Drone creates objects.

drone-runtime --kube-debug .drone.json

How Can I Help?

I am glad you asked. This is just an initial implementation and there is plenty of room for improvement. Please see our issue tracker for open issues; we are looking for volunteers. If you have any questions you can post in this thread, create a new topic, or join our chatroom.

Steps failing in Kubernetes Native
Drone on k8s: services aren't accessible at their hostname
Drone on K8S - FailedScheduling
Permission denied when trying to execute script
Connecting issues for services (1.0.0-rc.2, k8s)

#3 needs updating with the vars to enable the K8s integration


I am also curious as to how you think the situation of K8s running an older version of Docker which doesn’t support features like multi-stage docker builds should be handled?



Drone does not directly handle docker build; this is handled by a plugin. There are a few different plugins available for building Docker images, including plugins/docker and kubeciio/img (based on genuinetools/img). These plugins run Docker-in-Docker, so the host machine's Docker version does not matter.
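For reference, a build step using plugins/docker typically looks something like the sketch below. The repo name and secret names are placeholders; check the plugin's own documentation for the full list of settings.

```yaml
steps:
- name: publish
  image: plugins/docker
  settings:
    repo: octocat/hello-world   # placeholder repository
    tags: latest
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
```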


@bradrydzewski is there a way to bundle up my changes and deploy on a cluster?

Don’t get me wrong, drone-runtime lets me develop things, and this document is super nice of you to make. But nothing ever seems to be enough for me :smiley: No, seriously, it would help our adoption. Thanks.


You can also use the banzaicloud/drone-kaniko plugin, which can build multistage Dockerfile, but doesn’t use docker at all.


I can’t figure out how to limit Kubernetes job concurrency to 1. Currently Drone build jobs all run at the same time, which is crashing the server. How can I do this?


I believe the implementation keeps spinning up jobs without any throttling. At least I didn’t find (or look very hard for) any knobs that would control that.

In my mind, resource limits should be defined for each job (either explicitly or by default), as that is the mechanism Kubernetes uses to keep the cluster healthy.
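For context, this is what resource requests and limits look like on a plain Kubernetes container spec, which is the mechanism pipeline step limits would map onto (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: drone-step-example   # illustrative name
spec:
  containers:
  - name: step
    image: alpine
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
```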

In this PR, resource limits were implemented, and I was able to use them successfully:

In this issue I ask for a default limit, so the behavior you see should not happen even if you don’t define the limits:

In this issue I describe that, when I use limits, the jobs don’t trigger autoscaling and often deadlock:

My conclusion is that, without relying a little bit more on the Kubernetes scheduler (which would require volume support), the implementation is not ready yet.