From a43bf049ab621e7042c517cc32ab2296cd4c19bd Mon Sep 17 00:00:00 2001
From: dfitzmau
Date: Thu, 6 Mar 2025 14:44:15 +0000
Subject: [PATCH] OCPBUGS-55174: Updated the changing-cluster-network-mtu file
 with NIC info

---
 .../nw-aws-load-balancer-with-outposts.adoc  |   7 +-
 modules/nw-cluster-mtu-change.adoc           | 138 ++++++++----------
 networking/changing-cluster-network-mtu.adoc |   3 +
 3 files changed, 63 insertions(+), 85 deletions(-)

diff --git a/modules/nw-aws-load-balancer-with-outposts.adoc b/modules/nw-aws-load-balancer-with-outposts.adoc
index f9220554cf8f..ed67ee27f66e 100644
--- a/modules/nw-aws-load-balancer-with-outposts.adoc
+++ b/modules/nw-aws-load-balancer-with-outposts.adoc
@@ -27,7 +27,6 @@ You must annotate Ingress resources with the Outpost subnet or the VPC subnet, b
 * Configure the `Ingress` resource to use a specified subnet:
 +
---
 .Example `Ingress` resource configuration
 [source,yaml]
 ----
@@ -50,7 +49,5 @@ spec:
             port:
               number: 80
 ----
-<1> Specifies the subnet to use.
-* To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID.
-* To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
---
+<1> Specifies the subnet to use. To use the Application Load Balancer in an Outpost, specify the Outpost subnet ID. To use the Application Load Balancer in the cloud, you must specify at least two subnets in different availability zones.
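The subnet rule in the consolidated callout above can be expressed as a small check. This is an illustrative sketch, not part of the documented procedure or any AWS API; the subnet IDs and availability-zone names are made up, and the "exactly one Outpost subnet" reading of the callout is an assumption:

```python
def validate_alb_subnets(subnet_azs: dict, in_outpost: bool) -> None:
    """Check the subnet selection rule stated in the callout.

    subnet_azs maps subnet IDs to their availability zones.
    """
    if in_outpost:
        # Assumption: an Outpost ALB uses exactly the one Outpost subnet ID.
        if len(subnet_azs) != 1:
            raise ValueError("an Outpost ALB uses exactly the Outpost subnet ID")
    else:
        # A cloud ALB needs at least two subnets in different availability zones.
        if len(set(subnet_azs.values())) < 2:
            raise ValueError("a cloud ALB needs subnets in at least two availability zones")

# A cloud placement with subnets in two availability zones passes the check:
validate_alb_subnets({"subnet-0a1": "us-east-1a", "subnet-0b2": "us-east-1b"}, in_outpost=False)
```

Two subnets in the *same* zone would fail the cloud check, which mirrors why the callout says "different availability zones" rather than just "two subnets".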
+
diff --git a/modules/nw-cluster-mtu-change.adoc b/modules/nw-cluster-mtu-change.adoc
index 2e5d6aad1121..424f8b4e8aac 100644
--- a/modules/nw-cluster-mtu-change.adoc
+++ b/modules/nw-cluster-mtu-change.adoc
@@ -23,8 +23,7 @@
 ifndef::outposts[= Changing the cluster network MTU]
 ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]
 
 ifdef::outposts[]
-During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster.
-You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
+During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
 endif::outposts[]
 ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
@@ -72,60 +71,60 @@ Status:
 ifndef::local-zone,wavelength-zone,post-aws-zones,outposts[]
 . Prepare your configuration for the hardware MTU:
-
-** If your hardware MTU is specified with DHCP, update your DHCP configuration, such as with the following dnsmasq configuration:
++
+.. If your hardware MTU is specified with DHCP, update your DHCP configuration, such as with the following dnsmasq configuration:
 +
 [source,text]
 ----
-dhcp-option-force=26,
+dhcp-option-force=26, <1>
 ----
+<1> Where `` specifies the hardware MTU for the DHCP server to advertise.
++
+.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
++
+.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
 +
---
-where:
-
-``:: Specifies the hardware MTU for the DHCP server to advertise.
---
-
-** If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
-
-** If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
-
 ... Find the primary network interface by entering the following command:
-
 +
 [source,terminal]
 ----
-$ oc debug node/ -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0
+$ oc debug node/ -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1> <2>
 ----
+<1> Where `` specifies the name of a node in your cluster.
+<2> Where `ovs-if-phys0` is the primary network interface. For nodes that use multiple NIC bonds, specify `bond-sub0` for the primary NIC bond interface and `bond-sub1` for the secondary NIC bond interface.
 +
---
-where:
-
-``:: Specifies the name of a node in your cluster.
---
-
-... Create the following NetworkManager configuration in the `-mtu.conf` file:
+... Create the following NetworkManager configuration in the `-mtu.conf` file.
 +
 .Example NetworkManager connection configuration
 [source,ini]
 ----
 [connection--mtu]
-match-device=interface-name:
-ethernet.mtu=
+match-device=interface-name: <1>
+ethernet.mtu= <2>
 ----
+<1> Where `` specifies the primary network interface name.
+<2> Where `` specifies the new hardware MTU value.
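The NetworkManager drop-in above can be generated mechanically. The following sketch is a hypothetical helper, not part of the patch or of any {product-title} tooling; the interface name `ens3` and MTU `9000` are example values standing in for the empty placeholders in the docs:

```python
def render_mtu_dropin(interface: str, mtu: int) -> str:
    """Render a NetworkManager drop-in that pins the hardware MTU for one
    interface, mirroring the keyfile shown in the patch above."""
    if mtu < 576:  # sanity floor: minimum IPv4 reassembly buffer size
        raise ValueError(f"MTU {mtu} is implausibly small")
    return (
        f"[connection-{interface}-mtu]\n"
        f"match-device=interface-name:{interface}\n"
        f"ethernet.mtu={mtu}\n"
    )

print(render_mtu_dropin("ens3", 9000))
```

For bonded nodes, the same helper could be called once per bond interface and the results concatenated into a single `-mtu.conf` file, matching the NOTE about NIC bonds later in the module.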
 +
---
-where:
-
-``:: Specifies the new hardware MTU value.
-``:: Specifies the primary network interface name.
---
+[NOTE]
+====
+For nodes that use network interface controller (NIC) bonds, specify the primary NIC bond and any secondary NIC bond in a `-mtu.conf` file.
-
-... Create two `MachineConfig` objects, one for the control plane nodes and another for the worker nodes in your cluster:
+
+.Example NetworkManager connection configuration
+[source,ini]
+----
+[connection--mtu]
+match-device=interface-name:
+ethernet.mtu=9000
-
-.... Create the following Butane config in the `control-plane-interface.bu` file:
+
+[connection--mtu]
+match-device=interface-name:
+ethernet.mtu=9000
+----
+====
++
+... Create the following Butane config in the `control-plane-interface.bu` file, which is the `MachineConfig` object for the control plane nodes:
 +
-[source,yaml, subs="attributes+"]
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -141,11 +140,11 @@ storage:
       mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create the following Butane config in the `worker-interface.bu` file:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `-mtu.conf` file.
 +
-[source,yaml, subs="attributes+"]
+... Create the following Butane config in the `worker-interface.bu` file, which is the `MachineConfig` object for the compute nodes:
++
+[source,yaml,subs="attributes+"]
 ----
 variant: openshift
 version: {product-version}.0
@@ -161,9 +160,9 @@ storage:
       mode: 0600
 ----
 <1> Specify the NetworkManager connection name for the primary network interface.
-<2> Specify the local filename for the updated NetworkManager configuration file from the previous step.
-
-.... Create `MachineConfig` objects from the Butane configs by running the following command:
+<2> Specify the local filename for the updated NetworkManager configuration file from the previous step. For NIC bonds, specify the name for the `-mtu.conf` file.
++
+... Create `MachineConfig` objects from the Butane configs by running the following command:
 +
 [source,terminal]
 ----
@@ -183,15 +182,11 @@ endif::local-zone,wavelength-zone,post-aws-zones,outposts[]
 [source,terminal]
 ----
 $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
-  '{"spec": { "migration": { "mtu": { "network": { "from": , "to": } , "machine": { "to" : } } } } }'
+  '{"spec": { "migration": { "mtu": { "network": { "from": , "to": } , "machine": { "to" : } } } } }' <1> <2> <3>
 ----
-+
---
-where:
-
-``:: Specifies the current cluster network MTU value.
-``:: Specifies the target MTU for the cluster network. This value is set relative to the value of ``. For OVN-Kubernetes, this value must be `100` less than the value of ``.
-``:: Specifies the MTU for the primary network interface on the underlying host network.
+<1> Where `` specifies the current cluster network MTU value.
+<2> Where `` specifies the target MTU for the cluster network. This value is set relative to the value of ``. For OVN-Kubernetes, this value must be `100` less than the value of ``.
+<3> Where `` specifies the MTU for the primary network interface on the underlying host network.
 --
 +
 ifndef::outposts[]
@@ -246,19 +241,16 @@ machineconfiguration.openshift.io/state: Done
 .. Verify that the following statements are true:
 +
---
 * The value of `machineconfiguration.openshift.io/state` field is `Done`.
 * The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig -o yaml | grep ExecStart
+$ oc get machineconfig -o yaml | grep ExecStart <1>
 ----
-+
-where `` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+<1> Where `` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
 +
 The machine config must include the following update to the systemd configuration:
 +
@@ -269,7 +261,7 @@ ExecStart=/usr/local/bin/mtu-migration.sh
 ifndef::local-zone,wavelength-zone,post-aws-zones,outposts[]
 . Update the underlying network interface MTU value:
-
++
 ** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster.
 +
 [source,terminal]
@@ -278,7 +270,7 @@ $ for manifest in control-plane-interface worker-interface; do
     oc create -f $manifest.yaml
 done
 ----
-
++
 ** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
 
 . As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
@@ -316,37 +308,28 @@ machineconfiguration.openshift.io/state: Done
 +
 Verify that the following statements are true:
 +
---
- * The value of `machineconfiguration.openshift.io/state` field is `Done`.
- * The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
---
+* The value of the `machineconfiguration.openshift.io/state` field is `Done`.
+* The value of the `machineconfiguration.openshift.io/currentConfig` field is equal to the value of the `machineconfiguration.openshift.io/desiredConfig` field.
 
 .. To confirm that the machine config is correct, enter the following command:
 +
 [source,terminal]
 ----
-$ oc get machineconfig -o yaml | grep path:
+$ oc get machineconfig -o yaml | grep path: <1>
 ----
-+
-where `` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+<1> Where `` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
 +
 If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99--mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
 endif::local-zone,wavelength-zone,post-aws-zones,outposts[]
 
-. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
+. To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin. Replace `` with the new cluster network MTU that you specified with ``.
 +
 [source,terminal]
-+
 ----
 $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
   '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": }}}}'
 ----
-+
---
-where:
-
-``:: Specifies the new cluster network MTU that you specified with ``.
---
 
 . After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
 +
@@ -389,15 +372,10 @@ $ oc get nodes
 +
 [source,terminal]
 ----
-$ oc debug node/ -- chroot /host ip address show
+$ oc debug node/ -- chroot /host ip address show <1> <2>
 ----
-+
-where:
-+
---
-``:: Specifies a node from the output from the previous step.
-``:: Specifies the primary network interface name for the node.
---
+<1> Where `` specifies a node from the output from the previous step.
+<2> Where `` specifies the primary network interface name for the node.
 +
 .Example output
 [source,text]
diff --git a/networking/changing-cluster-network-mtu.adoc b/networking/changing-cluster-network-mtu.adoc
index 3b9dba394afb..f19b5a392d93 100644
--- a/networking/changing-cluster-network-mtu.adoc
+++ b/networking/changing-cluster-network-mtu.adoc
@@ -9,7 +9,10 @@ toc::[]
 [role="_abstract"]
 As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.
 
+// About the cluster MTU
include::modules/nw-cluster-mtu-change-about.adoc[leveloffset=+1]
+
+// Changing the cluster network MTU or Changing the cluster network MTU to support AWS Outposts
 include::modules/nw-cluster-mtu-change.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
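The two `oc patch` bodies in the module above follow a fixed arithmetic rule for OVN-Kubernetes: the cluster network MTU must be `100` less than the machine (hardware) MTU. The following is a hypothetical helper, not part of the docs or of `oc`; the example values `1400` and `9000` stand in for the empty placeholders:

```python
import json

OVN_OVERHEAD = 100  # per the module: cluster network MTU is 100 less than the machine MTU

def migration_patch(current_mtu: int, machine_mtu: int) -> str:
    """JSON body for the patch that begins the MTU migration."""
    target = machine_mtu - OVN_OVERHEAD  # target cluster network MTU
    return json.dumps({"spec": {"migration": {"mtu": {
        "network": {"from": current_mtu, "to": target},
        "machine": {"to": machine_mtu}}}}})

def finalize_patch(machine_mtu: int) -> str:
    """JSON body for the patch that finalizes the migration."""
    return json.dumps({"spec": {"migration": None, "defaultNetwork": {
        "ovnKubernetesConfig": {"mtu": machine_mtu - OVN_OVERHEAD}}}})

print(migration_patch(1400, 9000))
print(finalize_patch(9000))
```

Deriving both bodies from the same `machine_mtu` keeps the migration and finalize values consistent, which is the property the procedure's verification steps are checking by hand.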