Hi, is it possible to set priorityClassName for the controller and webhook-server when using the operator installation method? I would like to make sure that the controller is not evicted in the event of resource contention.
I read through the manifests, and I think what would need to happen is that in spec.template.spec here:
https://github.com/senthilrch/kube-fledged/blob/master/deploy/kubefledged-operator/helm-charts/kubefledged/templates/deployment-controller.yaml#L16
we would need something like this:
```yaml
{{- if .Values.controller.priorityClassName }}
priorityClassName: "{{ .Values.controller.priorityClassName }}"
{{- end }}
```
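For reference, the chart values that would drive a conditional block like that might look as follows. This is only a sketch of the proposed values layout, and `system-cluster-critical` is just an illustration (any PriorityClass that exists in the cluster would work):

```yaml
# values.yaml (sketch): setting the key would render the
# priorityClassName field into the controller deployment;
# omitting it would leave the pod at the cluster default priority.
controller:
  priorityClassName: system-cluster-critical
```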
And a similar option for the webhook's manifest.
It might also be worth adding this to the jobs' manifest, so that caching jobs could be assigned a lower priority class than regular workloads.
I am aware that avoiding this situation by making good use of resource requests and limits is the best approach. However, I am trying to plan for resource contention events anyway in case of misconfiguration (stranger things have happened).
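As a sketch of the lower-priority setup mentioned above, a dedicated PriorityClass for the caching jobs might look like the following (the class name and value are hypothetical, not part of kube-fledged):

```yaml
# Hypothetical PriorityClass for image caching jobs.
# A negative value keeps the cache jobs below the default (0)
# priority of regular workloads, so under resource contention
# the cache jobs are preempted first.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: image-cache-low-priority
value: -100
globalDefault: false
description: "Low priority for kube-fledged image caching jobs."
```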
@jpuskar: Thanks for raising this issue. I think adding this feature would be highly beneficial for controlling the priority assigned to kube-fledged and the jobs it creates. I'll have this implemented in the upcoming v0.10.0 release.
senthilrch changed the title from "priorityClass support" to "Feature: priorityClass support" on Oct 19, 2021.