
Add levels to plot benchmarks #3765

Open · wants to merge 15 commits into main

Conversation

alejoe91
Member

Replacement for #3301

This adds the option to aggregate levels when plotting benchmark results.

@alejoe91 alejoe91 added the benchmarks Related to benchmarks module label Mar 12, 2025
@alejoe91 alejoe91 requested review from samuelgarcia and yger March 12, 2025 16:35
Collaborator

@yger yger left a comment


Looks good to me, but I'll need to play with it a bit to understand the levels exactly. Each time a function is added to plot_tools, it will need to handle case keys for grouping then, am I right?

Comment on lines 395 to 401
def compute_gt_unit_locations(self, dataset_keys=None, **method_kwargs):
    # default to all datasets in the study
    if dataset_keys is None:
        dataset_keys = list(self.datasets.keys())
    # compute the "unit_locations" extension on each dataset's analyzer
    for dataset_key in dataset_keys:
        sorting_analyzer = self.get_sorting_analyzer(dataset_key=dataset_key)
        sorting_analyzer.compute("unit_locations", **method_kwargs)

Collaborator

Why the special focus on gt_unit_locations? Such a function could be great for computing quantities on the main analyzer, but maybe it could be more generic and allow any extension to be computed? Otherwise, let's discuss why only gt_unit_locations.

Member Author

This is because plot_performance_vs_depth_and_snr uses it.

Member Author

Changed in the last commit to have a more general compute_analyzer_extension and get_gt_unit_locations (which uses the gt_unit_location property if available).
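
The generic pattern described here could be sketched as follows. This is only an illustration: the stub classes stand in for the real SpikeInterface study/analyzer objects, the method name compute_analyzer_extension comes from the comment above, and everything else is invented for the example.

```python
# Illustrative sketch only: stubs stand in for SpikeInterface objects.
# The generic method computes any named extension on each dataset's
# analyzer, instead of hard-coding "unit_locations".
class StubAnalyzer:
    def __init__(self):
        self.computed = []

    def compute(self, extension_name, **method_kwargs):
        # record the request; a real analyzer would run the extension
        self.computed.append((extension_name, method_kwargs))


class StubStudy:
    def __init__(self, dataset_keys):
        self.datasets = {key: None for key in dataset_keys}
        self._analyzers = {key: StubAnalyzer() for key in dataset_keys}

    def get_sorting_analyzer(self, dataset_key):
        return self._analyzers[dataset_key]

    def compute_analyzer_extension(self, extension_name, dataset_keys=None, **method_kwargs):
        # generic version of compute_gt_unit_locations: any extension name
        if dataset_keys is None:
            dataset_keys = list(self.datasets.keys())
        for dataset_key in dataset_keys:
            analyzer = self.get_sorting_analyzer(dataset_key=dataset_key)
            analyzer.compute(extension_name, **method_kwargs)


study = StubStudy(["ds0", "ds1"])
study.compute_analyzer_extension("unit_locations", method="monopolar_triangulation")
```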

@alejoe91
Member Author

@yger let me share some examples. You can check the complex study in the test, which has 8 datasets and 16 cases (tdc2 and sc2).

Now, if you run the plotting functions without levels, you would get 16 cases, which is very messy... instead, let's say I'm only interested in the effect of ["sorter_name"] and ["sorter_name", "processing"] (the last level could be probe, or in my case a different stream/hybrid case).
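
As an aside, the level-aggregation idea can be illustrated with plain pandas, assuming results live in a DataFrame with a MultiIndex over the case levels. All data and level names below are invented for the example and are not the PR's actual implementation.

```python
import pandas as pd

# Invented example data: 2 sorters x 2 processing modes x 2 probes = 8 cases,
# mirroring the study layout described above.
index = pd.MultiIndex.from_product(
    [["tdc2", "sc2"], ["no-preprocess", "preprocess"], ["probeA", "probeB"]],
    names=["sorter_name", "processing", "probe"],
)
run_times = pd.DataFrame(
    {"run_time": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]}, index=index
)

# Keeping only some levels averages over the dropped ones:
# the 8 cases collapse down to 4.
levels_to_keep = ["sorter_name", "processing"]
aggregated = run_times.groupby(level=levels_to_keep).mean()
```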

Here's what you get:

plot_run_times

[two figures]

plot_performances_ordered

[two figures]

performance vs SNR

[figure]

performance vs SNR + depth

[figure]

performance comparison

[figure]

@yger
Collaborator

yger commented Mar 13, 2025

Nice, thanks for sharing the examples, this is great. Can you provide a custom palette for your cases? Let's say, for example, I want SC2 to be only orange and TDC2 to be only blue (with a different alpha depending on the case). Is it doable? I'll read it, but this seems very practical to ease the reading of the graphs.

@alejoe91
Member Author

> Nice, thanks for sharing the examples, this is great. Can you provide a custom palette for your cases? Let's say, for example, I want SC2 to be only orange and TDC2 to be only blue (with a different alpha depending on the case). Is it doable? I'll read it, but this seems very practical to ease the reading of the graphs.

Let me add some customization there

@alejoe91
Member Author

@yger with the last commit you can specify colors:

Examples:

plot_run_times(
    study,
    levels_to_keep=["sorter_name", "processing"],
    colors={"no-preprocess": "b", "preprocess": "r"},
)

[figure]

or:

plot_performances_ordered(
    study,
    levels_to_keep=["sorter_name", "processing"],
    colors={
        ("tdc2", "no-preprocess"): "b",
        ("tdc2", "preprocess"): "c",
        ("sc2", "no-preprocess"): "r",
        ("sc2", "preprocess"): "y",
    },
    orientation="horizontal",
)

[figure]

If you specify wrong colors based on the levels, you'll get this:

plot_run_times(study, levels_to_keep=["sorter_name", "processing"], colors={"no-preprocess": "b", "tdc2": "r"})

Error:

AssertionError: colors must have a color for each palette key: ['no-preprocess' 'preprocess']
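
For reference, a check like the one behind this assertion can be written in a few lines. The function name below is made up for illustration and is not the actual implementation in the PR.

```python
def check_palette_colors(colors, palette_keys):
    """Raise if the user-supplied colors dict does not cover every palette key."""
    assert all(key in colors for key in palette_keys), (
        f"colors must have a color for each palette key: {palette_keys}"
    )

# matching keys pass silently
check_palette_colors(
    {"no-preprocess": "b", "preprocess": "r"}, ["no-preprocess", "preprocess"]
)
```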

@yger
Collaborator

yger commented Mar 13, 2025

This looks good to me. It might be out of the scope of this PR, but all these plotting functions in benchmarks work on Study objects. Would it be worth making them also work for single Comparisons? Not sure this is crucial, just a thought.

@alejoe91
Member Author

You mean for a single Benchmark?

@yger
Collaborator

yger commented Mar 13, 2025

Yes. I am not using this syntax anymore and maybe it will be deprecated, but for a while we had functions such as plot_performances(comp), where comp was a single Benchmark (a GTComparison, for example?). But I should read the code again; maybe this is now a Study and all these plots in widgets will still work.

@alejoe91
Member Author

Ok, I also extended this to plot_unit_counts:

[figure]
