For example, in some cases we compute a hash over all of the dependencies of a single binary in a package; if an image tagged with that hash already exists, we know nothing has changed and therefore don’t need to build, publish, or deploy. This is especially useful in monorepos with many potential images.
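As a rough sketch of that skip-if-unchanged check (the file paths, registry name, and tag length here are hypothetical, and `docker manifest inspect` stands in for whatever registry lookup is actually used):

```shell
# Hypothetical sketch: hash every dependency input for one binary and use
# the digest as the image tag; if that tag already exists, skip the work.
deps=$(printf '%s\n' go.mod go.sum $(find cmd/server -name '*.go' 2>/dev/null) | sort)
tag=$(cat $deps 2>/dev/null | sha256sum | cut -c1-12)

if docker manifest inspect "registry.example.com/server:${tag}" >/dev/null 2>&1; then
  echo "image ${tag} already exists; skipping build, publish, and deploy"
else
  echo "inputs changed; building and tagging as ${tag}"
fi
```

The key property is that the tag is a pure function of the input files, so the same sources always map to the same image tag.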
This is a really interesting approach; I had not thought about it before.
I can understand why you might want to do this in the pipeline, because you will have all the required data available (all files, dependencies, etc.). You could replicate this in a configuration plugin, but at that point you are replicating a lot of what Drone already does in the pipeline.
Read all files of type ".build.yml" in the repo. These are custom YAML files that the configuration plugin understands, and there’s generally one per “project” in a repo. E.g.,
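A minimal sketch of what such a file might contain (the field names here are purely illustrative, not a documented format):

```yaml
# Hypothetical .build.yml for one project in the repo; every field name
# below is an assumption for illustration.
name: server
path: cmd/server
dependencies:
  - go.mod
  - go.sum
  - internal/
image: registry.example.com/server
```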
As for the error code, I only picked 64 because I believe it is in the range of “user-defined” error codes (sysexits.h starts its application-level codes at 64, EX_USAGE).
Ah sorry, I didn’t see your suggestion. I tested an exit code above 255 (out of range), and the shell truncates it modulo 256, so it can read back as 0, which definitely does not work (so ignore my suggestion).
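To illustrate (plain POSIX shell, nothing Drone-specific): the shell keeps only the low 8 bits of an exit status, so out-of-range values wrap around.

```shell
# Exit statuses are 8-bit: the shell reports the value modulo 256.
sh -c 'exit 256'; echo $?   # prints 0  (256 % 256)
sh -c 'exit 300'; echo $?   # prints 44 (300 % 256)
```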
Maybe we could also come up with some sort of when-clause plugin. I have been trying to think about how we could use the Kubernetes-style approach to allow more types of custom objects. I have not thought much about this, but we might be able to do some interesting things:
- name: build
  commands:
  - go build
  - go test