HA-Kubernetes mangles data when running mixed apiserver versions (upgrades) #46073
Comments
Justin, can you please add a sig label? We are trying to standardize on sig rather than area labels. Or if there is no identifiable SIG for it, say so.
Sorry @erictune - fixed!
Apply is kubectl.
And I consider this a feature request rather than a bug. cc @kubernetes/sig-api-machinery-feature-requests @kubernetes/sig-cli-feature-requests
This is basically a manifestation of the problem discussed in #4855. New fields shouldn't be exposed until all components are upgraded and there is no risk of rollback. We can't store unknown fields for reasons discussed in #30819. However, all components, but especially kubelet, really should be using patch to write status.
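To make the "use patch to write status" point concrete, here is a minimal client-go sketch (assuming the modern, context-taking client-go signatures; the node name and condition payload are illustrative) contrasting a full-object UpdateStatus with a status-subresource patch:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Risky during version skew: read-modify-write of the whole object.
	// Any field this binary's structs don't model is dropped on the write.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node-1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if _, err := client.CoreV1().Nodes().UpdateStatus(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Safer: a strategic merge patch against the status subresource that
	// names only the fields being written; everything else is untouched.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	if _, err := client.CoreV1().Nodes().Patch(context.TODO(), "node-1",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}
```

The patch names only the fields this component intends to write, so fields introduced in a newer API version survive even when the writer's structs don't model them.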
Edit: this "crossed in the mail" with bgrant's previous reply, but I think the patch issue still holds. So I think there are two things here: the kubectl apply behavior, and the underlying API-level data loss.
I am happy to split out the kubectl issue if desired. The API issue is more problematic though.
@justinsb Looking forward to seeing a design doc for this when it's ready.
@mml I think sig-apimachinery should own this; it's too fundamental an issue, I believe.
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Prevent issues from auto-closing with an `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
fyi, with the guidance in https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md#alpha-field-in-existing-api-version (and the sweeps done in #72169 and #72651), n-1 skew among API servers will no longer drop data from beta+ fields supported in version n-1.
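For reference, the linked guidance and sweeps follow a common in-tree pattern, sketched here in condensed form (paraphrased from the pod strategy code; treat the package aliases and the RuntimeClassName example as illustrative): on create and update, the registry strategy clears any field whose feature gate is disabled, unless the existing object already uses it, so a server with the gate off never persists data it can't round-trip.

```go
package pod

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	api "k8s.io/kubernetes/pkg/apis/core"
	"k8s.io/kubernetes/pkg/features"
)

// dropDisabledFields clears fields whose feature gates are off, unless the
// pre-update object was already using them (so updates routed through a
// gate-disabled API server don't wipe existing data).
func dropDisabledFields(podSpec, oldPodSpec *api.PodSpec) {
	if !utilfeature.DefaultFeatureGate.Enabled(features.RuntimeClass) && !runtimeClassInUse(oldPodSpec) {
		podSpec.RuntimeClassName = nil
	}
}

// runtimeClassInUse reports whether the existing object already set the field.
func runtimeClassInUse(oldPodSpec *api.PodSpec) bool {
	return oldPodSpec != nil && oldPodSpec.RuntimeClassName != nil
}
```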
We saw an interesting edge case during a user's 1.5 -> 1.6 upgrade. The installation tool (kops) did `kubectl apply` with a 1.6 manifest, but the 1.6-specific fields were removed from the deployment at some stage - in particular the tolerations. The `kubectl.kubernetes.io/last-applied-configuration` annotation reflected the 1.6 fields, but the toleration fields were missing from the spec itself. I hypothesize that a 1.5 kube-controller-manager was the leader, and it did e.g. an UpdateStatus at some stage during the update.

How can we prevent this from happening? The ideas I've had are that we could either report the lowest API version across the cluster, or we could use a protobuf unknown-fields approach so that we don't mangle extra fields. Neither of those is a great option.
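To illustrate the hypothesized mechanism, here is a self-contained toy (plain encoding/json, no Kubernetes types; the struct and field names are made up) showing how a full-object round-trip through an older binary's structs silently drops a field the older binary doesn't know about:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// What a 1.6 apiserver stored for the (toy) spec.
const stored = `{"replicas":1,"tolerations":[{"key":"node-role","operator":"Exists"}]}`

// The 1.5 binary's view of the spec: no Tolerations field exists yet.
type oldSpec struct {
	Replicas int `json:"replicas"`
}

func main() {
	var spec oldSpec
	_ = json.Unmarshal([]byte(stored), &spec) // tolerations are silently discarded

	out, _ := json.Marshal(spec) // full-object write sends the truncated spec back
	fmt.Println(string(out))     // {"replicas":1} -- the 1.6 field is gone
}
```

Running it prints `{"replicas":1}`: the tolerations the 1.6 writer stored are gone after the 1.5-style read-modify-write, which is exactly the mangling described above.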