MCS: Supporting Ignition v3 #492

Closed
cgwalters opened this issue Feb 25, 2019 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@cgwalters
Member

cgwalters commented Feb 25, 2019

Related #479

Using "I2" and "I3" for shorthand "Ignition spec v2" and "Ignition spec v3".

We need to support upgrades from 4.0 → 4.1. It feels like we need to simultaneously support I2 and I3 when booting a node. This is related to the MCS (Ignition server). A new master machine boots and the OS speaks I3. Does the MCS have both rendered v2/v3 versions and detect the client version based on e.g. User-Agent or something and serve the right one?

Or does I3 on the machine support reading I2 and translate?

It feels like a grounding assumption here is that we can reliably do this translation; there's concern about ambiguous/broken cases in I2 but let's assume we don't have those cases in our configs, or if we do we error out much earlier than the MCS (e.g. in the render controller).
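
To make the User-Agent idea above concrete, here's a minimal sketch (not the actual MCS code) of a server that holds both rendered configs and picks one per request. The header names, version strings, and port are assumptions for illustration only; a real implementation would key off whatever the Ignition client actually reports.

```go
// Minimal sketch: serve a spec-2 or spec-3 rendered config depending on
// what the requesting Ignition client announces about itself.
package main

import (
	"net/http"
	"strings"
)

// renderedV2 and renderedV3 stand in for the two pre-rendered MachineConfigs.
var (
	renderedV2 = []byte(`{"ignition":{"version":"2.2.0"}}`)
	renderedV3 = []byte(`{"ignition":{"version":"3.0.0"}}`)
)

func serveConfig(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	// Hypothetical detection: assume newer Ignition clients advertise a 3.x
	// spec version in their request headers; fall back to spec 2 otherwise.
	if strings.Contains(r.Header.Get("Accept"), "version=3") ||
		strings.Contains(r.Header.Get("User-Agent"), "Ignition/2") {
		w.Write(renderedV3)
		return
	}
	w.Write(renderedV2)
}

func main() {
	http.HandleFunc("/config/", serveConfig)
	http.ListenAndServe(":22623", nil)
}
```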

@runcom
Member

runcom commented Mar 18, 2019

This card is now out of scope; we should update the whole Ignition dependency to v3 once it comes out and adjust the code.

@runcom runcom closed this as completed Mar 28, 2019
@cgwalters cgwalters reopened this Apr 1, 2019
@cgwalters
Member Author

Based on some discussion with bgilbert, I think we also need Ignition to tell us whether the host is FCOS or RHCOS.

@cgwalters
Member Author

See also coreos/coreos-assembler#537

@cgwalters
Member Author

Also, what makes this issue complex is that we need to support booting from the 4.1.0 bootimages:
openshift/os#381

So the MCS would need to support both, unless that issue is solved first, which would be... nontrivial.

@cgwalters
Member Author

Personally, I think we can handle translating the vast majority of what OpenShift customers are doing today, so we'd add a translator into the MCS.
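
A rough sketch of what that translator hook could look like inside the MCS. `translateV2ToV3` here is a hypothetical placeholder for whatever library ends up doing the real spec-2 → spec-3 conversion; ambiguous spec-2 configs would ideally be rejected earlier (e.g. in the render controller), per the opening comment.

```go
// Sketch only, not real MCO code: translate a stored spec-2 rendered config
// to spec 3 on the fly when the requesting client wants spec 3.
package main

import (
	"encoding/json"
	"fmt"
)

// ignitionVersion pulls out just the ignition.version field so we can decide
// whether a stored config needs translating before it is served.
type ignitionVersion struct {
	Ignition struct {
		Version string `json:"version"`
	} `json:"ignition"`
}

// translateV2ToV3 is a placeholder for the actual conversion; it would return
// an error for spec-2 constructs that have no clean spec-3 equivalent.
func translateV2ToV3(raw []byte) ([]byte, error) {
	// ... real conversion elided ...
	return raw, nil
}

// configForClient returns a spec-3 config when the client wants spec 3 but
// the stored rendered config is still spec 2.
func configForClient(stored []byte, clientWantsV3 bool) ([]byte, error) {
	var v ignitionVersion
	if err := json.Unmarshal(stored, &v); err != nil {
		return nil, err
	}
	if clientWantsV3 && !isSpec3(v.Ignition.Version) {
		return translateV2ToV3(stored)
	}
	return stored, nil
}

func isSpec3(version string) bool {
	return len(version) > 0 && version[0] == '3'
}

func main() {
	out, err := configForClient([]byte(`{"ignition":{"version":"2.2.0"}}`), true)
	fmt.Println(string(out), err)
}
```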

@yuqi-zhang
Contributor

One thing to note: even if we have an updated bootimage in place, we'd run into a slight problem if we don't handle both specs: we'd need to update the bootimage at the same point we "upgrade the cluster to spec 3", in one encapsulated update (from my understanding).

@cgwalters
Member Author

The initial proposal here sketches out a way to handle both old and new bootimages via User-Agent detection in the MCS.

@cgwalters
Member Author

Hm, actually I'd forgotten that the installer generates the initial "pointer config" user data for AWS, which is spec 2 by default too.

So one tricky part here is that if we update bootimages to the point where they only speak e.g. spec 3 and not spec 2, we'd also need to update their user data at the same time.
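
For illustration, roughly what the two pointer-config shapes look like (spec 2 appends a remote config, spec 3 merges one); the URL and filenames are placeholders, not the installer's actual output. The point is that a spec-3-only Ignition won't accept the spec-2 shape, so flipping bootimages also means regenerating this user data.

```go
// Illustrative only: approximate shapes of the installer-generated
// "pointer config" user data under each spec. The URL is a placeholder.
package main

const pointerConfigSpec2 = `{
  "ignition": {
    "version": "2.2.0",
    "config": {
      "append": [{"source": "https://api-int.example.com:22623/config/worker"}]
    }
  }
}`

const pointerConfigSpec3 = `{
  "ignition": {
    "version": "3.0.0",
    "config": {
      "merge": [{"source": "https://api-int.example.com:22623/config/worker"}]
    }
  }
}`

func main() {
	println(pointerConfigSpec2)
	println(pointerConfigSpec3)
}
```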

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 19, 2020
@yuqi-zhang
Contributor

Support has been added as part of the 4.6 release. Closing.

@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 22, 2020
@bgilbert
Contributor

Closing per #492 (comment).
