[Bug] Scale deployment to zero policy not working as expected #1245

Open
VegardEikenes opened this issue Feb 24, 2025 · 0 comments
Labels
bug Something isn't working

VegardEikenes commented Feb 24, 2025

Kyverno Version

1.12

Kubernetes Version

1.29

Kubernetes Platform

AKS

Description

I have tried to implement this policy:
https://kyverno.io/policies/other/scale-deployment-zero/scale-deployment-zero/

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: scale-deployment-zero
  annotations:
    policies.kyverno.io/title: Scale Deployment to Zero
spec:
  rules:
  - name: annotate-deployment-rule
    match:
      any:
      - resources:
          kinds:
          - v1/Pod.status
    preconditions:
      all:
      - key: "{{request.operation || 'BACKGROUND'}}"
        operator: Equals
        value: UPDATE
      - key: "{{ sum(request.object.status.containerStatuses[*].restartCount || [`0`]) }}"
        operator: GreaterThan
        value: 1
    context:
    - name: rsname
      variable:
        jmesPath: "request.object.metadata.ownerReferences[0].name"
        default: ''
    - name: deploymentname
      apiCall:
        urlPath: "/apis/apps/v1/namespaces/{{request.namespace}}/replicasets"
        jmesPath: "items[?metadata.name=='{{rsname}}'].metadata.ownerReferences[0].name | [0]"
    mutate:
      targets:
        - apiVersion: apps/v1
          kind: Deployment
          name: "{{deploymentname}}"
          namespace: "{{request.namespace}}"
      patchStrategicMerge:
        metadata:
          annotations:
            sre.corp.org/troubleshooting-needed: "true"
        spec:
          replicas: 0

It seems to work just fine when the restarting pod has 1 container. However, if the restarting pod has more than 1 container, it fails.
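
For context, with more than one container the Pod status carries one restartCount per container, so the sum() precondition should be operating on an array of numbers. A trimmed, illustrative status fragment (the container names and counts below are made up, not taken from my cluster):

status:
  containerStatuses:
  - name: app            # illustrative container name
    restartCount: 3
  - name: sidecar        # illustrative container name
    restartCount: 5

With a single entry the same expression resolves without error, which is why the failure only shows up on multi-container pods.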

Steps to reproduce

  1. Implement the policy.
  2. Ensure that a pod has more than 1 container and that it is restarting (a sample manifest is sketched below).
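
A minimal Deployment that should trigger the condition, assuming a two-container pod whose containers keep exiting (the name, image, and commands are illustrative, not taken from the original report):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: restart-demo             # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: restart-demo
  template:
    metadata:
      labels:
        app: restart-demo
    spec:
      containers:
      - name: first
        image: busybox:1.36      # illustrative image
        command: ["sh", "-c", "sleep 5; exit 1"]   # exits so restartCount grows
      - name: second
        image: busybox:1.36
        command: ["sh", "-c", "sleep 5; exit 1"]

After a few restart cycles both containers accumulate restarts, at which point the precondition in the policy above is expected to fire.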

Expected behavior

The policy should scale deployments with restarting pods down to 0, including when the pod has more than 1 container.

Screenshots

No response

Kyverno logs

ERR github.com/kyverno/kyverno/pkg/background/mutate/mutate.go:180 > error="failed to mutate existing resource, rule annotate-deployment-rule, response error: failed to evaluate preconditions: failed to substitute variables in condition key: failed to resolve sum(request.object.status.containerStatuses[*].restartCount || [0]) at path : JMESPath query failed: JMESPath function 'sum': invalid operand"

Slack discussion

No response

Troubleshooting

  • I have read and followed the documentation AND the troubleshooting guide.
  • I have searched other issues in this repository and mine is not recorded.
VegardEikenes added the bug label on Feb 24, 2025
VegardEikenes marked this as a duplicate of #1219 on Feb 24, 2025