I’ve noticed some unusual behaviour when setting Resource Requests for Pipeline Steps. In our runner we provide the following two env vars:
```
DRONE_RESOURCE_REQUEST_CPU: 200m
DRONE_RESOURCE_REQUEST_MEMORY: 100MiB
```
I assumed (perhaps wrongly) that these are the default CPU and Memory Requests applied to each individual Step when the Pipeline defines no Resource Requests of its own. Instead, it appears they are the maximum total Requests applied to the entire Pipeline Pod, and they cannot be overridden. So every Step in the Pipeline ends up with:
```
Requests:
  cpu:     1m
  memory:  4Mi
```
But the first Step (usually the git clone) gets whatever remains of the total after those per-Step values are subtracted. So with 4 more Steps after the git clone, the clone Step reserves:
```
Requests:
  cpu:     196m
  memory:  84Mi
```
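The arithmetic above can be sketched as follows. This is an assumption based purely on what I observed, not on the runner's actual source: the runner appears to treat the env vars as a Pod-wide total, give every container a tiny floor, and hand the remainder to the first container.

```python
# Observed per-Step floor (from the Pod spec above).
FLOOR_CPU_M = 1    # 1m CPU
FLOOR_MEM_MI = 4   # 4Mi memory

def split_requests(total_cpu_m, total_mem_mi, num_steps):
    """Return a list of (cpu_millicores, memory_Mi) per Step,
    mimicking the observed allocation behaviour."""
    steps = [(FLOOR_CPU_M, FLOOR_MEM_MI)] * num_steps
    # The first Step (the clone) absorbs whatever the floors leave over.
    rem_cpu = total_cpu_m - FLOOR_CPU_M * (num_steps - 1)
    rem_mem = total_mem_mi - FLOOR_MEM_MI * (num_steps - 1)
    steps[0] = (rem_cpu, rem_mem)
    return steps

# 200m / 100Mi total, clone plus 4 more Steps:
print(split_requests(200, 100, 5))
# first Step gets (196, 84); the remaining four get (1, 4) each
```

This reproduces the numbers above: 200m − 4 × 1m = 196m, and 100Mi − 4 × 4Mi = 84Mi.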
If this is intentional behaviour, the docs should probably explain that setting these vars on the runner defines the total reserved resources for the entire scheduled Pod (overriding any per-Step specification in a user's Pipeline), split across the Containers.
I'll test removing both env vars to confirm that each Step can then define its own Requests. It would still be nice to be able to set a minimum applied to all Steps (at the runner level), so each Step gets a buffer of reserved resources and we reduce the likelihood of over-allocating Jobs onto a single Node (another problem we hit).
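To make the wish concrete, here is a sketch of the semantics I'd prefer. This is hypothetical behaviour, not anything the runner currently does, and the minimum values are made-up examples: the runner-level setting acts as a per-Step floor, and a Step's own request wins when it asks for more.

```python
# Hypothetical runner-level per-Step minimums (example values, not real config).
RUNNER_MIN_CPU_M = 50    # 50m CPU floor
RUNNER_MIN_MEM_MI = 25   # 25Mi memory floor

def effective_request(step_cpu_m=None, step_mem_mi=None):
    """Apply the runner minimum as a floor under any per-Step request.
    A Step that defines nothing gets the floor; a Step that asks for
    more than the floor keeps its own value."""
    cpu = max(step_cpu_m or 0, RUNNER_MIN_CPU_M)
    mem = max(step_mem_mi or 0, RUNNER_MIN_MEM_MI)
    return cpu, mem

print(effective_request())          # Step defines nothing -> (50, 25)
print(effective_request(200, 10))   # Step asks for more CPU -> (200, 25)
```

Under this scheme the Pod's total request is simply the sum of the per-Step effective values, rather than a fixed total split across containers.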