
ArgoCD does not update PVC managed by StatefulSet #6666

Open
3 tasks done
diodel-agr opened this issue Jul 8, 2021 · 8 comments
Labels
bug (Something isn't working) · component:core (Syncing, diffing, cluster state cache) · duplicate (This issue or pull request already exists) · type:bug

Comments

@diodel-agr

Checklist:

  • I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version.

Describe the bug

I have encountered an issue with ArgoCD while updating the size of a PersistentVolumeClaim managed by a StatefulSet.
In the StatefulSet manifest the volumeClaimTemplates storage size gets updated and the pods are restarted; however, the PVC remains at the previous size. ArgoCD does not report any sync failures.

The workaround for this issue is to manually update the PVC storage request, after which Kubernetes starts resizing the actual storage volume to the requested size.
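
For reference, such a manual update is essentially a patch of the PVC's storage request; a minimal sketch (the PVC name, namespace and size below are placeholders, not values from this report):

    # Patch the PVC created from the StatefulSet's volumeClaimTemplate directly;
    # Kubernetes then expands the underlying EBS volume and filesystem.
    kubectl patch pvc prometheus-db-prometheus-0 -n monitoring \
      -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'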

To Reproduce

The issue was encountered on an EKS cluster version 1.18.9, ArgoCD version 1.7.11, with a prometheus-stack application version 16.10.0 (chart link). The storage class is AWS EBS, dynamically provisioned, type gp2, filesystem type ext4.

  1. Launch ArgoCD and the prometheus-stack application. In values.yaml the configuration is something like this:

         storageSpec:
           volumeClaimTemplate:
             spec:
               storageClassName: gp2
               accessModes: ["ReadWriteOnce"]
               resources:
                 requests:
                   storage: 150Gi

  2. Then update the storage request to another value. The StatefulSet gets updated, however the PVC remains at the initial size (see the check sketched below).
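
To confirm the mismatch after a sync, comparing the StatefulSet template with the live PVC makes the stale size visible; a sketch with placeholder resource names and namespace:

    # The template now reports the new size ...
    kubectl get sts prometheus-prometheus-stack -n monitoring \
      -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}'
    # ... while the PVC created from it still shows the old one.
    kubectl get pvc -n monitoring \
      -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage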

The structure of the StatefulSet is shown in the screenshot attached to the issue (not reproduced here).

Expected behavior

I expect that ArgoCD will update the volume size automatically.

Version

argocd: v1.7.11+97401f9
  BuildDate: 2020-12-10T02:39:33Z
  GitCommit: 97401f9bb9ec9db350d2c8bf6563f5a43b71b2c6
  GitTreeState: clean
  GoVersion: go1.14.12
  Compiler: gc
  Platform: linux/amd64

Logs
No relevant logs were found

@diodel-agr added the bug label Jul 8, 2021
@jannfis
Member

jannfis commented Jul 8, 2021

Dupe of #4693

@jannfis added the duplicate label Jul 8, 2021
@kubermihe

I have the same behaviour on my side. When I try to increase the disk size of the loki-stack StatefulSet, the size only changes in the sts if it is deleted in "NON-CASCADING (ORPHAN) DELETE" mode. The size is then correct in the StatefulSet, but not in the PVC it manages.

If the change is made manually in ArgoCD by editing the PVC, the change is reverted immediately by ArgoCD and the disk keeps its old size. So I think this is a misbehaviour in ArgoCD (maybe a bug?).

ArgoCD is running in version 2.4.8 (https://artifacthub.io/packages/helm/argo/argo-cd/4.10.4) in our K8s cluster (v1.23.6).
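
For context, the non-cascading (orphan) delete mentioned above corresponds to a kubectl call along these lines (the name and namespace are placeholders); it removes the StatefulSet object but leaves its pods and PVCs in place so ArgoCD can recreate it with the new template:

    kubectl delete statefulset loki -n loki --cascade=orphan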

@sathieu
Contributor

sathieu commented Apr 19, 2023

According to kubernetes/kubernetes#68737 (comment), patching works, i.e.:

kubectl patch pvc <name> -p '{"spec": {"resources": {"requests": {"storage": "<size>"}}}}'

But how do we patch with ArgoCD?

NB: we tested with serverSideApply, but this is not working either.
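
For reference, the serverSideApply test above presumably means enabling the corresponding sync option on the Application; a minimal sketch of that configuration (the application name and everything outside syncOptions are placeholders):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: prometheus-stack   # placeholder name
    spec:
      syncPolicy:
        syncOptions:
          - ServerSideApply=true   # diff and apply via Kubernetes server-side apply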

@js-score

js-score commented May 4, 2023

I'm in the same boat. Has anyone been able to fix the issue?

@dinukarajapaksha

Facing the same issue on the version below. Is there an update on this?

{
    "Version": "v2.6.5+60104ac",
    "BuildDate": "2023-03-14T14:19:45Z",
    "GitCommit": "60104aca6faec73d10d57cb38245d47f4fb3146b",
    "GitTreeState": "clean",
    "GoVersion": "go1.18.10",
    "Compiler": "gc",
    "Platform": "linux/amd64",
    "KustomizeVersion": "v4.5.7 2022-08-02T16:35:54Z",
    "HelmVersion": "v3.10.3+g835b733",
    "KubectlVersion": "v0.24.2",
    "JsonnetVersion": "v0.19.1"
}

@diegocejasprieto

Having the same issue.

@sonic-sw

I came across this issue this morning too, in a Postgres DB. In the end I did the following steps.

I am not sure if this works for all cases; in my case it was in combination with Bitnami's Helm chart for PostgreSQL, which by default sets the PVC to be retained on delete/scale. Use with caution!

  1. Disabled auto-sync by commenting out the following lines in the application manifest:

     # Sync policy
     # syncPolicy:
     #   automated:
     #     prune: true
     #     selfHeal: true
     #     allowEmpty: false
     #   syncOptions:
     #     - Validate=true
     #     - CreateNamespace=true
     #     - PrunePropagationPolicy=foreground
     #     - PruneLast=true
     #   managedNamespaceMetadata:
     #     labels:
     #       type: data-stores

  2. Deleted the sts in ArgoCD. IMPORTANT: non-cascading! (A kubectl equivalent of steps 2-4 is sketched after this list.)

  3. Edited the PVC in GCP from 8Gi to 10Gi (allow resize must be enabled on the storage class).

  4. Deleted the running pod of the DB.

  5. Resynced ArgoCD.

  6. Re-enabled auto-sync.
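
Outside the ArgoCD UI, steps 2-4 roughly correspond to the following kubectl commands (a sketch only; names, namespace and sizes are placeholders):

    # Step 2: remove the StatefulSet without deleting its pods or PVCs
    kubectl delete statefulset my-postgresql -n databases --cascade=orphan

    # Step 3: grow the PVC (the storage class must allow volume expansion)
    kubectl patch pvc data-my-postgresql-0 -n databases \
      -p '{"spec": {"resources": {"requests": {"storage": "10Gi"}}}}'

    # Step 4: recreate the DB pod so the filesystem resize is applied
    kubectl delete pod my-postgresql-0 -n databases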

@alexmt added the component:core and type:bug labels Jul 10, 2024
@alekhrycaiko

alekhrycaiko commented Jan 29, 2025

Is there any follow-up on resolving or mitigating this issue? It has persisted for a few years now, which is concerning, as it's a consistent operational pain. I'm not entirely clear whether this is Argo or my EBS CSI driver having issues, and what next steps can be taken here.
