Hi,
We are looking at enabling cache allocation for containers based on pod QoS from the kubelet. While the l3CacheSchema field seems sufficient, we would have to let an RDT implementation such as the one in runc create a CLOS (a dedicated cache partition) per container. While there can be any number of containers, we are limited to only around 16 CLOS per platform, which implies we can associate cache ways with only about 16 containers.
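To make the per-container model concrete, here is a rough sketch in Go (the type, field layout, and mask values are illustrative, not the spec's actual definition) of how a runtime that only sees a per-container l3CacheSchema ends up asking for one CLOS per container, even when the schemas are identical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// IntelRdt is an illustrative stand-in for per-container RDT settings;
// only the l3CacheSchema field discussed above is shown.
type IntelRdt struct {
	// L3CacheSchema is the L3 capacity bitmask schema, in the same format
	// the kernel's RDT/resctrl interface uses, e.g. "L3:0=ffff0;1=3ff".
	L3CacheSchema string `json:"l3CacheSchema,omitempty"`
}

func main() {
	// With only a schema per container, every container that sets one
	// implicitly asks for its own CLOS, even if the masks are identical.
	containers := []IntelRdt{
		{L3CacheSchema: "L3:0=ffff0"}, // guaranteed pod container
		{L3CacheSchema: "L3:0=ffff0"}, // another guaranteed container, same mask
		{L3CacheSchema: "L3:0=000ff"}, // best-effort container
	}
	for i, c := range containers {
		b, _ := json.Marshal(c)
		fmt.Printf("container %d intelRdt: %s\n", i, b)
	}
	// Three containers -> up to three CLOS consumed, against a platform
	// budget of roughly 16.
}
```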
With Intel RDT, we were able to demonstrate performance improvements for NFV deployments by mapping multiple processes to cache partitions identified by CLOS. These can be dedicated (one CLOS per set of processes) or shared/overlapped (full overlap of a CLOS across sets of processes).
Please see the white paper, which explains this approach:
https://builders.intel.com/docs/networkbuilders/deterministic_network_functions_virtualization_with_Intel_Resource_Director_Technology.pdf
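As a small illustration of the two mapping styles above (the cache-way bitmask values are made up for this example, not taken from the white paper):

```go
package main

import "fmt"

// Cache-way bitmasks for two CLOS, in the style of resctrl schemata
// entries. The specific masks are hypothetical.
const (
	closA  = 0x0F0 // ways 4-7, dedicated to one set of processes
	closB  = 0xF00 // ways 8-11, dedicated to another set
	shared = 0xFF0 // ways 4-11, used by both sets (full overlap)
)

func main() {
	// Dedicated: the two CLOS own disjoint ways, so their masks do not overlap.
	fmt.Printf("dedicated overlap: %#x\n", closA&closB) // prints 0x0

	// Fully overlapped/shared: both sets of processes map to the same mask,
	// so a single CLOS (or identical CLOS) covers them all.
	fmt.Printf("shared mask:       %#x\n", shared)
}
```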
We propose introducing another field called clos_identifier so that developers can logically group containers based on QoS policy (guaranteed/burstable/best-effort), or more granularly based on workload type, and let containers use cache ways in dedicated, shared, or overlapped combinations with other containers.
This would also avert the situation of running out of the limited number of CLOS groups and help use them efficiently for policy-based configurations.
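A minimal sketch of what the proposed field could look like alongside l3CacheSchema; the closIdentifier name and the grouping behaviour shown here are the proposal itself, not an existing field or API:

```go
package main

import "fmt"

// IntelRdt sketches the proposed addition: closIdentifier names a logical
// CLOS group, so containers that share an identifier share a CLOS.
// (Field names and behaviour here are the proposal, not an existing API.)
type IntelRdt struct {
	L3CacheSchema  string `json:"l3CacheSchema,omitempty"`
	ClosIdentifier string `json:"closIdentifier,omitempty"` // proposed field
}

func main() {
	// Example grouping by pod QoS class: many containers, few CLOS.
	containers := []IntelRdt{
		{ClosIdentifier: "guaranteed", L3CacheSchema: "L3:0=ffff0"},
		{ClosIdentifier: "guaranteed", L3CacheSchema: "L3:0=ffff0"},
		{ClosIdentifier: "burstable", L3CacheSchema: "L3:0=00ff0"},
		{ClosIdentifier: "best-effort", L3CacheSchema: "L3:0=0000f"},
	}

	// A runtime could key CLOS allocation by identifier instead of by
	// container, keeping the count within the ~16-CLOS platform budget.
	clos := map[string]string{}
	for _, c := range containers {
		clos[c.ClosIdentifier] = c.L3CacheSchema
	}
	fmt.Printf("%d containers -> %d CLOS groups: %v\n",
		len(containers), len(clos), clos)
}
```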
Looking forward to your feedback.
Thanks,
Dakshina Ilangovan
Intel Corporation