
Request for addition of clos_identifier field to intelRdt configuration #990

Closed
dakshinai opened this issue Sep 11, 2018 · 1 comment

@dakshinai

Hi,

We are looking at enabling cache allocation for containers based on pod QoS from the kubelet. While the l3CacheSchema field seems sufficient for expressing the allocation itself, it requires an RDT implementation such as the one in runc to create a CLOS (a dedicated cache partition) per container. Since there can be any number of containers but only around 16 CLOS per platform, cache ways can be associated with at most about 16 containers.
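
For context, here is a minimal sketch of where l3CacheSchema sits in the spec's Go bindings (specs-go/config.go); the struct is trimmed to that one field, and the schemata string is an illustrative resctrl-style value, not taken from a real deployment.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed sketch of the spec's Go binding for the intelRdt block
// (specs-go/config.go); only the field discussed in this issue is shown.
type LinuxIntelRdt struct {
	// L3CacheSchema holds the resctrl schemata line for L3 cache
	// allocation, e.g. "L3:0=ffff0;1=ffff0".
	L3CacheSchema string `json:"l3CacheSchema,omitempty"`
}

func main() {
	// One CLOS per container today: each container carries its own
	// schema, so a runtime creates one resctrl group per container.
	rdt := LinuxIntelRdt{L3CacheSchema: "L3:0=ffff0;1=ffff0"}
	out, _ := json.MarshalIndent(rdt, "", "  ")
	fmt.Println(string(out))
}
```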

With Intel RDT, we were able to demonstrate performance improvements for NFV deployments by mapping multiple processes to cache partitions identified by CLOS. These partitions can be dedicated (one CLOS per set of processes) or shared/overlapped (up to full overlap) across sets of processes.

Please see the white paper that describes this work:
https://builders.intel.com/docs/networkbuilders/deterministic_network_functions_virtualization_with_Intel_Resource_Director_Technology.pdf

We propose introducing another field, clos_identifier, so that developers can logically group containers by QoS policy (guaranteed/burstable/best-effort), or more granularly by workload type, and let containers use cache ways that are dedicated to, shared with, or overlapped with those of other containers.

This would also mitigate the limited number of CLOS groups per platform and help use them efficiently for policy-based configurations.
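
To make the request concrete, here is a minimal sketch of what the proposed field could look like, assuming it is added alongside l3CacheSchema. The field name follows this issue, and the grouping semantics (containers carrying the same identifier share one CLOS) are the behavior requested above, not anything in the published spec.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical extension of the intelRdt block carrying the proposed
// clos_identifier field; the name and the sharing semantics follow
// this issue, not the published spec.
type LinuxIntelRdt struct {
	// ClosIdentifier names a logical CLOS group; containers with the
	// same identifier would share one CLOS and its cache ways.
	ClosIdentifier string `json:"clos_identifier,omitempty"`
	L3CacheSchema  string `json:"l3CacheSchema,omitempty"`
}

func main() {
	// Two guaranteed-QoS containers tagged with the same identifier
	// would consume a single CLOS instead of two of the ~16 available.
	for _, name := range []string{"container-a", "container-b"} {
		rdt := LinuxIntelRdt{
			ClosIdentifier: "guaranteed",
			L3CacheSchema:  "L3:0=ffff0",
		}
		out, _ := json.Marshal(rdt)
		fmt.Printf("%s: %s\n", name, out)
	}
}
```

With this shape, a runtime could key its resctrl groups by clos_identifier instead of by container, so the number of CLOS consumed tracks the number of policies rather than the number of containers.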

Looking forward to your feedback.

Thanks,
Dakshina Ilangovan
Intel Corporation

@dakshinai
Author

Duplicate - #988
