Help a newbie out

Hi, I seem to be getting started with Drone at a very awkward time. Much of what I am trying to do is not working, and when I try to figure it out I find lots of dead links and out-of-date information.

I’ve been finding blogs from the last year or two that indicate Drone is super easy to use with a low learning curve, but I am really struggling to figure it out – much of this is because the docs seem to have been deleted or moved somewhere, and most of the old blogs are out of date.

I am trying to evaluate drone as the new CI/CD tool for my company and it is looking pretty bad versus Jenkins-X or Concourse at the moment.

Is this going to settle down in the next day or two or are things going to be a disaster for another couple weeks?

Concrete examples:

  1. I am trying to get notifications working and I can’t find documentation – I did find: which indicates to me that in the past Drone did have extensive documentation, but the current docs have nothing for notifications.

  2. Also, trying to figure out why my docker image won’t build, I found this thread, Docker Plugin broken?, which points to a troubleshooting document that no longer exists. I can’t figure out how to turn on debugging.

It is true that the project is in a transition period from 0.8 (docs found here) to a new 1.0-rc.1, and we are still porting over the docs. You are welcome to use the 0.8 docs, which are more extensive and up-to-date.

it is looking pretty bad versus Jenkins-X or Concourse at the moment… Is this going to settle down in the next day or two or are things going to be a disaster for another couple weeks?

I think these comments are unnecessary and non-constructive. This provides little incentive for anyone to help you. Please keep things constructive going forward.

I sincerely apologize for letting frustration come through and thank you for any help you have time to provide. I would like to be 100% self-sufficient but can’t seem to figure it out myself. I didn’t want to waste anyone’s time getting a trivial set-up working.

However, I will throw myself on the mercy of the community.

Specifically trying to get docker builds working.

I am running Drone in a Kubernetes cluster hosted on Digital Ocean’s beta Kubernetes; Drone is deployed using the helm/stable chart with a minimal configuration. SSL is not enabled.

I get a lot of warnings in my drone-drone-agent-dind container log but no errors. – I’m happy to publish the logs to a gist if that makes sense.

I have a theoretically very simple .drone.yml file.

    pipeline:
      build:
        image: node:carbon
        commands:
          - npm run ci

      publish:
        image: plugins/docker
        repo: fdiinc/fusion-frontend
        context: fusion-frontend
        dockerfile: fusion-frontend/Dockerfile
        #username: ""
        #password: ""

      # See:
      flowdock:
        token: "...redacted..."
        on_started: true
        on_success: true
        on_failure: true

I created registry creds using the UI.

Registry Address:
Registry User:
Registry PW: pw

If I put valid credentials in the .drone.yml, the docker step instantly fails:

+ /usr/local/bin/dockerd -g /var/lib/docker
time="2018-11-15T22:57:05Z" level=fatal msg="Error authenticating: exit status 1"

So then I used drone-cli 0.8.6 to check whether it was configured correctly. I don’t understand why I need to list a repo, but I guess that is so it can do the matching and figure out which custom registry applies to the specific project in question?

drone registry ls fdiinc/fusion

I found elsewhere that the email field must be left unspecified in later versions, so I think that is OK?

So I decided not to put a username/password into the docker step, since I have creds specified (per above, with the commented-out username/password):

Registry Address:
Registry Username: my-registry-username
Registry Password: my-registry-password

Now the docker build runs, but it can’t upload:

+ /usr/local/bin/dockerd -g /var/lib/docker
Registry credentials not provided. Guest mode enabled.
+ /usr/local/bin/docker version
Client:
 Version:	17.12.0-ce
 API version:	1.35
 Go version:	go1.9.2
 Git commit:	c97c6d6
 Built:	Wed Dec 27 20:05:38 2017
 OS/Arch:	linux/amd64

Server:
  Version:	17.12.0-ce
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.2
  Git commit:	c97c6d6
  Built:	Wed Dec 27 20:12:29 2017
  OS/Arch:	linux/amd64
  Experimental:	false
+ /usr/local/bin/docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.0-7-amd64
Operating System: Alpine Linux v3.7 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 996.5MiB
Name: 79513850ab01
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Insecure Registries:
Live Restore Enabled: false

+ /usr/local/bin/docker build --rm=true -f fusion-frontend/Dockerfile -t c3bedede735707b28bd695fac50d2dace27e9ae7 fusion-frontend --pull=true --label org.label-schema.schema-version=1.0 --label --label org.label-schema.vcs-ref=c3bedede735707b28bd695fac50d2dace27e9ae7 --label org.label-schema.vcs-url=
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Sending build context to Docker daemon  756.2kB

Step 1/8 : FROM node:carbon
carbon: Pulling from library/node
61be48634cb9: Pulling fs layer
fa696905a590: Pulling fs layer
b6dd2322bbef: Pulling fs layer
32477089adb4: Pulling fs layer
febe7209ec28: Pulling fs layer
4364cbe57162: Pulling fs layer
ace5c680ff94: Pulling fs layer
4acd6a9b7a48: Pulling fs layer
32477089adb4: Waiting
febe7209ec28: Waiting
4364cbe57162: Waiting
ace5c680ff94: Waiting
4acd6a9b7a48: Waiting
fa696905a590: Verifying Checksum
fa696905a590: Download complete
b6dd2322bbef: Verifying Checksum
b6dd2322bbef: Download complete
61be48634cb9: Verifying Checksum
61be48634cb9: Download complete
febe7209ec28: Verifying Checksum
febe7209ec28: Download complete
4364cbe57162: Verifying Checksum
4364cbe57162: Download complete
61be48634cb9: Pull complete
4acd6a9b7a48: Verifying Checksum
4acd6a9b7a48: Download complete
32477089adb4: Verifying Checksum
32477089adb4: Download complete
ace5c680ff94: Verifying Checksum
ace5c680ff94: Download complete
fa696905a590: Pull complete
b6dd2322bbef: Pull complete
32477089adb4: Pull complete
febe7209ec28: Pull complete
4364cbe57162: Pull complete
ace5c680ff94: Pull complete
4acd6a9b7a48: Pull complete
Digest: sha256:7b65413af120ec5328077775022c78101f103258a1876ec2f83890bce416e896
Status: Downloaded newer image for node:carbon
 ---> 82c0936c46c1
Step 2/8 : WORKDIR /usr/src/app
Removing intermediate container 121a03ff6c17
 ---> 385f84c182ca
Step 3/8 : COPY package*.json ./
 ---> 7e6b5f4a4bce
Step 4/8 : RUN npm install --no-audit
 ---> Running in 2ad994875ed5
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.4 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.4: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

added 1715 packages from 699 contributors in 56.958s
Removing intermediate container 2ad994875ed5
 ---> 09d0f098cdd2
Step 5/8 : COPY . .
 ---> 5059f6ecf192
Step 6/8 : EXPOSE 3001
 ---> Running in 34062371561b
Removing intermediate container 34062371561b
 ---> 92ac071cd390
Step 7/8 : CMD [ "npm", "start" ]
 ---> Running in f2c382bbbd6c
Removing intermediate container f2c382bbbd6c
 ---> cc8f10012028
Step 8/8 : LABEL ""='2018-11-15T23:30:01Z' "org.label-schema.schema-version"='1.0' "org.label-schema.vcs-ref"='c3bedede735707b28bd695fac50d2dace27e9ae7' "org.label-schema.vcs-url"=''
 ---> Running in 4af974d9fbfe
Removing intermediate container 4af974d9fbfe
 ---> 1768b35acec0
Successfully built 1768b35acec0
Successfully tagged c3bedede735707b28bd695fac50d2dace27e9ae7:latest
+ /usr/local/bin/docker tag c3bedede735707b28bd695fac50d2dace27e9ae7 fdiinc/fusion-frontend:latest
+ /usr/local/bin/docker push fdiinc/fusion-frontend:latest
The push refers to repository []
ffd56d126117: Preparing
13f38f77db39: Preparing
d6960b75f647: Preparing
514faa867816: Preparing
44473255e807: Preparing
cf8a9d04d13b: Preparing
fae583f1d4e5: Preparing
9d22e51f0c6d: Preparing
58b8d417193d: Preparing
704a8634956f: Preparing
05f0b6bcfa5c: Preparing
0972cc82b682: Preparing
cf8a9d04d13b: Waiting
fae583f1d4e5: Waiting
9d22e51f0c6d: Waiting
58b8d417193d: Waiting
704a8634956f: Waiting
05f0b6bcfa5c: Waiting
0972cc82b682: Waiting
denied: requested access to the resource is denied
time="2018-11-15T23:32:07Z" level=fatal msg="exit status 1"

Also, the notifications aren’t working. I suspect my pipeline step is borked, but it is the exact example given by the flowdock notification instructions here:

It is hard to tell with YAML if this should be under the pipeline like I specified, or if it belongs at the base (zero indent). I tried both and don’t get anything that I can find either way.

Where should logs indicating notification failure go? I looked at the logs for all three images running in my helm chart, and none seem to mention notification failures (drone-drone-server, drone-drone-agent, drone-drone-dind).

It looks like two separate issues in your YAML file. The first thing I see is that your notification syntax is invalid:

      token: "...redacted..."
      on_started: true
      on_success: true
      on_failure: true

If you want to send a notification you can use a notification plugin. The format of a plugin is almost identical to any other step in the pipeline. Plugins are just Docker images, so you need to provide Drone with a Docker image for your plugin.

Plugins steps are defined in the following format:

  [name of step]:
    image: [name of plugin image]
    [plugin parameters ...]

For example here is a Slack plugin:

      notify:
        image: plugins/slack  # this is a docker image
        channel: dev


In your Docker plugin it appears you have not specified the correct repository name, which, when using a custom registry, needs to be the fully qualified name. You can see examples and read more about this plugin here.

    image: plugins/docker
-   repo: fdiinc/fusion-frontend
+   repo:
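For illustration, a fully qualified configuration might look like the following (a sketch only; the registry hostname is a made-up placeholder, not your actual registry):

```yaml
  publish:
    image: plugins/docker
    # with a custom registry, `registry` must be set and `repo` must
    # be the fully qualified name (hostname here is hypothetical)
    registry: registry.example.com
    repo: registry.example.com/fdiinc/fusion-frontend
    context: fusion-frontend
    dockerfile: fusion-frontend/Dockerfile
```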

Thanks for the help, unfortunately I already tried that repo tag before and get the same error.

As I suspected, the flowdock notification plugin is out of date or just wrong.

I will attempt to get email working first since that seems to be an up-to-date notification.

My builds are all now failing with a blanket “invalid or missing image” and when I attempt to re-submit it just gives me a grey clock and never starts. – no idea what I did to cause that. I haven’t changed anything in the deployment and have just been sending updated .drone.yml files by pushing changes to github.

The logs in kubernetes pods have not changed.

Well, technically speaking, you did not declare a plugin. Aside from the syntax being invalid (not matching the plugin structure I outlined above), you have omitted the image attribute. If you do not provide an image, then you are not providing Drone with a plugin.

In addition, I am not aware of a flowdock plugin existing. The plugin registry is a good place to find plugins that are generally supported by the community, all of which have thorough examples and documentation.

This is probably because you have a step without an image defined (your flowdock step).

Thanks for the help, unfortunately I already tried that repo tag before and get the same error.

Based on the limited information provided, this is definitely part of the problem. Including the registry in the repository name is required, per the plugin documentation. See the example for configuring a custom registry.

I also see that you configured registry credentials. Per the documentation, registry credentials are only used to pull images. They are never exposed to your pipeline or to any plugins or steps in the pipeline. Since you are using the Docker plugin, you can expect that it will not have access to registry credentials.

If you want to pass docker username and password to the Docker plugin you need to use secrets. You can learn to configure secrets here. The secret documentation includes samples for the Docker plugin.
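As a sketch of what that looks like in 0.8 (the secret names here are conventional, not mandatory), you add the secrets via the CLI and then reference them in the step:

```yaml
  publish:
    image: plugins/docker
    repo: fdiinc/fusion-frontend
    # docker_username / docker_password are injected from Drone
    # secrets, added beforehand with something like:
    #   drone secret add --repository fdiinc/fusion-frontend \
    #     --name docker_username --value <your-user>
    secrets: [ docker_username, docker_password ]
```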

I would recommend reading over each page in the “contents” section (in the left side menu of the docs). I think this will help provide you with a better understanding of the fundamentals such as secrets, registry credentials, plugins, pipeline steps, etc.

Thank you for your time – I definitely am struggling with the documentation. I have been reading it, but it is very difficult to put it into practice when there is no feedback when things go wrong. There are so many pieces that have to be perfect.

Is there a logfile somewhere that lists more information than what I get by using kubectl logs <pod> <container>?

I will go back to zero and start over.

I recommend taking a step back …

Is the first step in your pipeline working?

If yes, then we can evaluate the second (docker) step. If this step is failing please provide:

  1. your yaml
  2. a copy of your .docker/config.json file (with the credentials removed). run docker login on your laptop and cat ~/.docker/config.json to get the file contents.
  3. a copy of drone secrets ls <your docker repository>
  4. a copy of your build logs for the docker step

With this information it should be easy to evaluate and get things working. Remove the notification step for now, and I can help you with that after.
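To make that concrete, a minimal pipeline that isolates just the first step might look like this (a sketch, reusing your existing build step):

```yaml
pipeline:
  build:
    image: node:carbon
    commands:
      - npm run ci
```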

Apologies in advance :slight_smile: … I’m back and considerably less ignorant. But I’m still not working fully.

I completely nuked my k8s cluster and rebuilt it from the ground up with working letsencrypt, secrets and ingress configurations.

But I ended up with the same error I had before – once I learned about the debug: true flag to the docker plugin, I found that I was getting a TLS timeout.

So, I created a streamlined pipeline:

    pipeline:
      test:
        image: centos:7
        commands:
          # This works
          - curl --max-time 10 --request GET --url ''
          # This sort-of works (similar behaviour to tracepath when in a working environment -- see below)
          - tracepath
          # This times out
          - curl --max-time 10

I can get to random internet services, but a service that is hosted inside Digital Ocean in another droplet can’t be reached. Excerpt from the log:

+ curl --max-time 10 --request GET --url ''
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   220  100   220    0     0    782      0 --:--:-- --:--:-- --:--:--   782
{"args":{"isthereanecho":"yes"},"headers":{"x-forwarded-proto":"https","host":"","accept":"*/*","user-agent":"curl/7.29.0","x-forwarded-port":"443"},"url":""}
+ tracepath
 1?: [LOCALHOST]                                         pmtu 1500
 1:  gateway                                               0.047ms 
 1:  gateway                                               0.083ms 
 2:  gateway                                               0.057ms pmtu 1450
 2:                                           0.076ms 
 3:                                       1.472ms 
 4:                                        0.900ms 
 5:                                        1.513ms 
 6:                                        6.253ms 
 7:  no reply
 8:  no reply
 9:  no reply


30:  no reply
     Too many hops: pmtu 1450
     Resume: pmtu 1450 
+ curl --max-time 10
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0
curl: (28) Operation timed out after 10000 milliseconds with 0 out of 0 bytes received

I did a trivial test where I created my own pod on the same cluster, and from that pod I can get to the registry server:

> kubectl exec -it shell-demo -- /bin/bash
root@shell-demo:/# curl
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 400 Not a Docker request</title>
<body><h2>HTTP ERROR 400</h2>
<p>Problem accessing /. Reason:
<pre>    Not a Docker request</pre></p><hr><a href="">Powered by Jetty:// 9.4.11.v20180605</a><hr/>

root@shell-demo:/# tracepath
 1?: [LOCALHOST]                      pmtu 1450
 1:                                           0.069ms
 1:                                           0.057ms
 2:                                       1.241ms
 3:                                        0.863ms
 4:                                        3.672ms
 5:                                       27.309ms
 6:  no reply
 7:  no reply
 8:  no reply

Because this works in plain-jane Kubernetes but fails when run in drone, I am guessing it has something to do with drone :wink: but I understand if this is too much in the weeds to get much assistance.

Drone creates user-defined networks in Docker which can conflict with Kubernetes networking, depending on how it is configured and what type of DNS configuration is in place.

This can usually be reproduced with the following:

docker network create foo
docker run --network=foo alpine ping

The reason is that Kubernetes is implementing its own DNS settings which are not available to user-defined networks. This is because DNS resolution for user-defined networks is implemented differently (at the Docker level) than the bridge network.

There are two possible solutions to this problem (assuming I have diagnosed the root cause):

  1. do not mount the host machine docker volume. Instead create a dind container in your pod, and connect the agent to the dind instance. This is what the official helm chart does (created by the helm folks, not by me). For whatever reason, this seems to avoid the kubernetes dns resolution issues
  2. do not use kubernetes for your agent machines. Instead use the drone autoscaler and let it manage your agent instances. It will configure them perfectly and will probably save you time and money (and sanity).

I personally do not use Kubernetes so I lack the expertise required to help you troubleshoot this issue. There are plenty of threads in this discourse forum where people discuss kubernetes networking issues. You might have some luck searching through the backlogs, if my above suggestions do not work.

I decided to quit fighting the external routing (or whatever the issue is) and created another private registry on the same Kubernetes cluster. This worked on the first try, and I can push and pull images from it with no problem from the drone pipeline.

I can now continue with my work – next stop, actually deploying the application I just built, looped back to the same cluster.

If I figure out how to get the digital ocean kubernetes networking working I will update this thread, but until then, happy trails.

Great, glad to hear you got it working. If you ever figure out the Kubernetes configuration (or whatever it was), please do let me know. Thanks!