OCPBUGS-55174: Updated the changing-cluster-network-mtu file with NIC info

dfitzmau committed Mar 6, 2025
1 parent 5e58cf3 commit a43bf04
Showing 3 changed files with 63 additions and 85 deletions.
7 changes: 2 additions & 5 deletions modules/nw-aws-load-balancer-with-outposts.adoc
@@ -27,7 +27,6 @@ You must annotate Ingress resources with the Outpost subnet or the VPC subnet, b

* Configure the `Ingress` resource to use a specified subnet:
+
--
.Example `Ingress` resource configuration
[source,yaml]
----
@@ -50,7 +49,5 @@ spec:
port:
number: 80
----
<1> Specifies the subnet to use.
* To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID.
* To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
--
<1> Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
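For reference, the following is a minimal, hypothetical sketch of how a subnet might be specified on the `Ingress` resource, assuming the AWS Load Balancer Controller's `alb.ingress.kubernetes.io/subnets` annotation and an illustrative Outpost subnet ID; neither value is taken from this change:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Hypothetical value; replace with your Outpost subnet ID.
    alb.ingress.kubernetes.io/subnets: subnet-0123456789abcdef0
----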
138 changes: 58 additions & 80 deletions modules/nw-cluster-mtu-change.adoc
@@ -23,8 +23,7 @@ ifndef::outposts[= Changing the cluster network MTU]
ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]

ifdef::outposts[]
During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster.
You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
endif::outposts[]

ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
@@ -72,60 +71,60 @@ Status:

ifndef::local-zone,wavelength-zone,post-aws-zones,outposts[]
. Prepare your configuration for the hardware MTU:

** If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
+
.. If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
+
[source,text]
----
dhcp-option-force=26,<mtu>
dhcp-option-force=26,<mtu> <1>
----
<1> Where `<mtu>` specifies the hardware MTU for the DHCP server to advertise.
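+
For example, a hypothetical dnsmasq entry that advertises a hardware MTU of 9000 would be:
+
[source,text]
----
dhcp-option-force=26,9000
----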
+
.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
+
.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
+
--
where:

`<mtu>`:: Specifies the hardware MTU for the DHCP server to advertise.
--

** If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.

** If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.

... Find the primary network interface by entering the following command:

+
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0
$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1> <2>
----
<1> Where `<node_name>` specifies the name of a node in your cluster.
<2> Where `ovs-if-phys0` is the primary network interface. For nodes that use multiple NIC bonds, append `bond-sub0` for the primary NIC bond interface and `bond-sub1` for the secondary NIC bond interface.
+
--
where:

`<node_name>`:: Specifies the name of a node in your cluster.
--
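+
For example, for a hypothetical node named `worker-0`, the command would look like the following and prints the primary interface name, such as `ens3`:
+
[source,terminal]
----
$ oc debug node/worker-0 -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0
----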

... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file:
... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file.
+
.Example NetworkManager connection configuration
[source,ini]
----
[connection-<interface>-mtu]
match-device=interface-name:<interface>
ethernet.mtu=<mtu>
match-device=interface-name:<interface> <1>
ethernet.mtu=<mtu> <2>
----
<1> Where `<interface>` specifies the primary network interface name.
<2> Where `<mtu>` specifies the new hardware MTU value.
+
--
where:

`<mtu>`:: Specifies the new hardware MTU value.
`<interface>`:: Specifies the primary network interface name.
--
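+
For example, assuming a hypothetical primary interface named `ens3` and a new hardware MTU of 9000, the `ens3-mtu.conf` file might contain:
+
[source,ini]
----
[connection-ens3-mtu]
match-device=interface-name:ens3
ethernet.mtu=9000
----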
[NOTE]
====
For nodes that use network interface controller (NIC) bonds, specify the primary NIC bond and any secondary NIC bond in a `<bond-interface>-mtu.conf` file.

.Example NetworkManager connection configuration
[source,ini]
----
[connection-<primary-NIC-bond-interface>-mtu]
match-device=interface-name:<bond-iface-name>
ethernet.mtu=9000

[connection-<secondary-NIC-bond-interface>-mtu]
match-device=interface-name:<bond-secondary-iface-name>
ethernet.mtu=9000
----
====
+
... Create two `MachineConfig` objects, one for the control plane nodes and another for the worker nodes in your cluster:

.... Create the following Butane config in the `control-plane-interface.bu` file:
... Create the following Butane config in the `control-plane-interface.bu` file, which is the `MachineConfig` object for the control plane nodes:
+
[source,yaml, subs="attributes+"]
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
@@ -141,11 +140,11 @@ storage:
mode: 0600
----
<1> Specify the NetworkManager connection name for the primary network interface.
<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.

.... Create the following Butane config in the `worker-interface.bu` file:
<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
+
[source,yaml, subs="attributes+"]
... Create the following Butane config in the `worker-interface.bu` file, which is the `MachineConfig` object for the compute nodes:
+
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
@@ -161,9 +160,9 @@ storage:
mode: 0600
----
<1> Specify the NetworkManager connection name for the primary network interface.
<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.

.... Create `MachineConfig` objects from the Butane configs by running the following command:
<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `<bond-interface>-mtu.conf` file.
+
... Create `MachineConfig` objects from the Butane configs by running the following command:
+
[source,terminal]
----
@@ -183,15 +182,11 @@ endif::local-zone,wavelength-zone,post-aws-zones,outposts[]
[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
'{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }'
'{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' <1> <2> <3>
----
+
--
where:

`<overlay_from>`:: Specifies the current cluster network MTU value.
`<overlay_to>`:: Specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
`<machine_to>`:: Specifies the MTU for the primary network interface on the underlying host network.
--
<1> Where `<overlay_from>` specifies the current cluster network MTU value.
<2> Where `<overlay_to>` specifies the target MTU for the cluster network. This value is set relative to the value of `<machine_to>`. For OVN-Kubernetes, this value must be `100` less than the value of `<machine_to>`.
<3> Where `<machine_to>` specifies the MTU for the primary network interface on the underlying host network.
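+
As a hypothetical worked example, to raise the hardware MTU to 9000 on a cluster whose current cluster network MTU is 1400, set the cluster network target to 8900, which is `100` less than the machine MTU for OVN-Kubernetes:
+
[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 8900 } , "machine": { "to" : 9000 } } } } }'
----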
+
ifndef::outposts[]
@@ -246,19 +241,16 @@ machineconfiguration.openshift.io/state: Done

.. Verify that the following statements are true:
+
--
* The value of the `machineconfiguration.openshift.io/state` field is `Done`.
* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
--
.. To confirm that the machine config is correct, enter the following command:
+
[source,terminal]
----
$ oc get machineconfig <config_name> -o yaml | grep ExecStart
$ oc get machineconfig <config_name> -o yaml | grep ExecStart <1>
----
+
where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+
The machine config must include the following update to the systemd configuration:
+
@@ -269,7 +261,7 @@ ExecStart=/usr/local/bin/mtu-migration.sh

ifndef::local-zone,wavelength-zone,post-aws-zones,outposts[]
. Update the underlying network interface MTU value:

+
** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster.
+
[source,terminal]
@@ -278,7 +270,7 @@ $ for manifest in control-plane-interface worker-interface; do
oc create -f $manifest.yaml
done
----

+
** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.

. As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
@@ -316,37 +308,28 @@ machineconfiguration.openshift.io/state: Done
+
Verify that the following statements are true:
+
--
* The value of the `machineconfiguration.openshift.io/state` field is `Done`.
* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
--
* The value of the `machineconfiguration.openshift.io/state` field is `Done`.
* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.

.. To confirm that the machine config is correct, enter the following command:
+
[source,terminal]
----
$ oc get machineconfig <config_name> -o yaml | grep path:
$ oc get machineconfig <config_name> -o yaml | grep path: <1>
----
+
where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+
If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99-<interface>-mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
endif::local-zone,wavelength-zone,post-aws-zones,outposts[]

. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin. Replace `<mtu>` with the new cluster network MTU that you specified with `<overlay_to>`.
+
[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
'{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}'
----
+
--
where:

`<mtu>`:: Specifies the new cluster network MTU that you specified with `<overlay_to>`.
--
Where `<mtu>` specifies the new cluster network MTU that you specified with `<overlay_to>`.
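+
Continuing the hypothetical example above, finalizing a migration to a cluster network MTU of 8900 would look like the following:
+
[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": 8900 }}}}'
----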

. After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
+
@@ -389,15 +372,10 @@ $ oc get nodes
+
[source,terminal]
----
$ oc debug node/<node> -- chroot /host ip address show <interface>
$ oc debug node/<node> -- chroot /host ip address show <interface> <1> <2>
----
+
where:
+
--
`<node>`:: Specifies a node from the output from the previous step.
`<interface>`:: Specifies the primary network interface name for the node.
--
<1> Where `<node>` specifies a node from the output from the previous step.
<2> Where `<interface>` specifies the primary network interface name for the node.
+
.Example output
[source,text]
3 changes: 3 additions & 0 deletions networking/changing-cluster-network-mtu.adoc
@@ -9,7 +9,10 @@ toc::[]
[role="_abstract"]
As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.

// About the cluster MTU
include::modules/nw-cluster-mtu-change-about.adoc[leveloffset=+1]

// Changing the cluster network MTU or Changing the cluster network MTU to support AWS Outposts
include::modules/nw-cluster-mtu-change.adoc[leveloffset=+1]

[role="_additional-resources"]
