Add unit test for ray with an existing cluster and fix nil pointer #3142
Conversation
Signed-off-by: Jason Parraga <[email protected]>
Code Review Agent Run Status
Code Review Agent Run #66e24f: Actionable Suggestions - 2
Review Details
Changelist by Bito
This pull request implements the following key changes.
head_group_spec = None
if cfg.head_node_config:
    if cfg.head_node_config.requests or cfg.head_node_config.limits:
        head_pod_template = PodTemplate(
            pod_spec=pod_spec_from_resources(
                primary_container_name=_RAY_HEAD_CONTAINER_NAME,
                requests=cfg.head_node_config.requests,
                limits=cfg.head_node_config.limits,
            )
        )
    else:
        head_pod_template = cfg.head_node_config.pod_template

    head_group_spec = HeadGroupSpec(
        cfg.head_node_config.ray_start_params,
        K8sPod.from_pod_template(head_pod_template) if head_pod_template else None,
Consider extracting the head node pod template creation logic into a separate helper method to improve code readability and maintainability. The current implementation has similar logic repeated for both head and worker nodes.
Code suggestion
Check the AI-generated fix before applying
- head_group_spec = None
- if cfg.head_node_config:
-     if cfg.head_node_config.requests or cfg.head_node_config.limits:
-         head_pod_template = PodTemplate(
-             pod_spec=pod_spec_from_resources(
-                 primary_container_name=_RAY_HEAD_CONTAINER_NAME,
-                 requests=cfg.head_node_config.requests,
-                 limits=cfg.head_node_config.limits,
-             )
-         )
-     else:
-         head_pod_template = cfg.head_node_config.pod_template
+ def _create_pod_template(node_config, container_name):
+     if node_config.requests or node_config.limits:
+         return PodTemplate(
+             pod_spec=pod_spec_from_resources(
+                 primary_container_name=container_name,
+                 requests=node_config.requests,
+                 limits=node_config.limits
+             )
+         )
+     return node_config.pod_template
+
+ head_group_spec = None
+ if cfg.head_node_config:
+     head_pod_template = _create_pod_template(cfg.head_node_config, _RAY_HEAD_CONTAINER_NAME)
Code Review Run #66e24f
default_img = Image(name="default", fqn="test", tag="tag")
settings = SerializationSettings(
    project="proj",
    domain="dom",
    version="123",
    image_config=ImageConfig(default_image=default_img, images=[default_img]),
    env={},
)
Consider moving the SerializationSettings configuration to a test fixture, since it appears to be test configuration that could be reused across multiple tests. This would help reduce code duplication and make the tests more maintainable.
Code suggestion
Check the AI-generated fix before applying
- default_img = Image(name="default", fqn="test", tag="tag")
- settings = SerializationSettings(
-     project="proj",
-     domain="dom",
-     version="123",
-     image_config=ImageConfig(default_image=default_img, images=[default_img]),
-     env={},
- )
+ @pytest.fixture(scope="module")
+ def serialization_settings():
+     default_img = Image(name="default", fqn="test", tag="tag")
+     return SerializationSettings(
+         project="proj",
+         domain="dom",
+         version="123",
+         image_config=ImageConfig(default_image=default_img, images=[default_img]),
+         env={}
+     )
Code Review Run #66e24f
Tracking issue
Relayed to flyteorg/flyte#5877
In a follow-up pull request I'd like to make the worker_node_config optional.
Why are the changes needed?
When submitting a job to an existing Ray cluster, you shouldn't need to configure details for an ephemeral cluster that will never be used.
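For illustration only, here is a minimal sketch of the scenario this change targets: a Ray task pointed at an existing cluster, with no head_node_config at all. The address value and the WorkerNodeConfig arguments below are assumptions made for the example, not something introduced by this PR.

from flytekit import task
from flytekitplugins.ray import RayJobConfig, WorkerNodeConfig

@task(
    task_config=RayJobConfig(
        # No head_node_config here: this is the case that previously hit the nil pointer.
        worker_node_config=[WorkerNodeConfig(group_name="workers", replicas=1)],
        address="ray://existing-cluster-head:10001",  # hypothetical endpoint of an existing cluster
    )
)
def square_sum(n: int) -> int:
    # Plain Python body; the Ray plugin wires up the cluster at execution time.
    return sum(i * i for i in range(n))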
What changes were proposed in this pull request?
Adds a unit test and handles a nil pointer case, since head node specs can be None.
How was this patch tested?
Unit tests
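As a sketch of what such a test can look like (reusing the SerializationSettings pattern from the review suggestion above), the snippet below only asserts that serialization no longer raises when head_node_config is absent. The config field names are illustrative assumptions, not the exact test added by this PR.

from flytekit import task
from flytekit.configuration import Image, ImageConfig, SerializationSettings
from flytekitplugins.ray import RayJobConfig, WorkerNodeConfig

def test_ray_task_against_existing_cluster():
    @task(
        task_config=RayJobConfig(
            worker_node_config=[WorkerNodeConfig(group_name="workers", replicas=1)],
            address="ray://existing-cluster-head:10001",  # hypothetical endpoint
        )
    )
    def t(a: int) -> int:
        return a

    default_img = Image(name="default", fqn="test", tag="tag")
    settings = SerializationSettings(
        project="proj",
        domain="dom",
        version="123",
        image_config=ImageConfig(default_image=default_img, images=[default_img]),
        env={},
    )

    # Before the fix, serializing a task with no head_node_config could dereference
    # a missing head group spec; after the fix this should not raise.
    custom = t.get_custom(settings)
    assert custom is not None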
Summary by Bito
This PR implements fixes for a nil pointer issue in the Ray plugin's head node configuration and enhances test coverage for existing cluster scenarios. The changes include adding null checks for head_node_config and restructuring head group spec initialization. The PR also adds comprehensive unit testing for Ray task execution with existing cluster configurations.
Unit tests added: True
Estimated effort to review (1-5, lower is better): 2