
Variable Fold Weights #990

Open
tjburch opened this issue Mar 10, 2025 · 2 comments


tjburch commented Mar 10, 2025

I'm currently working through a project with a time dependency, so I'm using expanding-window folds (e.g. fold 1 trains on 2018-2021 with a 2022 test set, fold 2 trains on 2018-2022 with a 2023 test set, etc.). When running a hyperparameter optimization, my understanding is that the code averages the evaluation metric across folds, weighting each fold equally even though the folds have differing numbers of records. In this case, it would probably make more sense to weight each fold's contribution to the evaluation metric in proportion to its number of instances.
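For concreteness, here's a minimal sketch of this kind of resampling setup, assuming one row per year just to keep it small; rsample's `rolling_origin()` with `cumulative = TRUE` produces expanding windows like this (with real row-level data you'd more likely build the windows from the dates, e.g. with `sliding_period()`):

```r
library(rsample)

# Toy data: one row per year, purely to keep the sketch small.
df <- data.frame(year = 2018:2024, y = rnorm(7))

folds <- rolling_origin(
  df,
  initial    = 4,    # first analysis set: 2018-2021
  assess     = 1,    # each assessment set is one year: 2022, then 2023, ...
  cumulative = TRUE  # the analysis window expands rather than slides
)
```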

An example might be:

| Fold | N   | Metric |
|------|-----|--------|
| 1    | 100 | 0.80   |
| 2    | 200 | 0.85   |
| 3    | 250 | 0.90   |
| 4    | 300 | 0.95   |

We should weight fold 4 more heavily than fold 1 when choosing the hyperparameters: equal weighting gives an average of 0.875, whereas weighting proportionally to N gives 0.894.
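In base R terms, just reproducing the arithmetic above:

```r
n      <- c(100, 200, 250, 300)
metric <- c(0.80, 0.85, 0.90, 0.95)

mean(metric)                  # 0.875  -- equal weighting
weighted.mean(metric, w = n)  # ~0.894 -- weighted proportionally to N
```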

I'm happy to work on this myself and make a PR; I just wanted to confirm it would be a supported feature before investing time in it.

As far as implementation goes, I think the straightforward way would be to pass a vector to the various tune functions (tune_bayes() or tune_grid()) indicating per-fold weights, and possibly to add a helper function that builds that vector from the fold sizes, which seems like the most obvious use case. I could also see an argument for putting it into the rset object, but that might be a bit over-engineered.
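To make the helper idea concrete, something along these lines (the name `fold_weights_by_n` and the `fold_weights` argument are just illustrative, not existing tune/rsample API):

```r
library(rsample)
library(purrr)

# One weight per fold, proportional to the size of its assessment set.
fold_weights_by_n <- function(rset) {
  n <- map_int(rset$splits, ~ nrow(assessment(.x)))
  n / sum(n)
}

# Hypothetical usage -- `fold_weights` is the proposed argument, not
# something tune_grid() currently accepts:
# tune_grid(wflow, resamples = folds, fold_weights = fold_weights_by_n(folds))
```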


topepo (Member) commented Mar 10, 2025

That’s a good idea. I think that estimate_tune_results() is what would be affected.

We should think more about how the implementation would work (in terms of the API). You could propose something (maybe in a fork or draft PR). One idea would be to add a specialized column to the rset (maybe .metric_weights) that the system can consume when it needs to.
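Very roughly, the aggregation step might then look something like this (a sketch only; the real estimate_tune_results() also groups by .config and the tuning parameters, and the column names here are illustrative):

```r
library(dplyr)

# `fold_metrics`: one row per resample with .metric, .estimator, .estimate,
# plus a .metric_weights column joined in from the rset.
summarize_weighted <- function(fold_metrics) {
  fold_metrics |>
    group_by(.metric, .estimator) |>
    summarize(
      mean = weighted.mean(.estimate, w = .metric_weights, na.rm = TRUE),
      n    = sum(!is.na(.estimate)),
      .groups = "drop"
    )
}
```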

We are on a pretty tight/intense development schedule until about August so I don’t think that anything would be fully implemented until then (but we’ll be happy to discuss and advise).


tjburch commented Mar 10, 2025

Sounds good. Not a major rush so if it takes O(months) to get folded in, that's not a problem.

I'll get working on it, and hopefully put together as complete a PR as possible so it doesn't disrupt your schedule.
