Just fitting the meta classifier in the StackingClassifier / Freezing some pre-trained models #383
Hi there, there's no mlxtend-specific way to save/freeze pre-trained models. However, afaik, the typical scikit-learn pickle approach should work for most: http://scikit-learn.org/stable/modules/model_persistence.html

It would be nice to have some solution that allows saving a model state in non-binary form. Some time ago, I had planned to build a utility for serializing scikit-learn estimators via JSON, e.g., see here: https://cmry.github.io/notes/serialize. This still hovers around on my "someday" to-do list, but I don't think I will be able to get to it in the near future. I would welcome PRs, though, if you'd like to build something like this -- actually, building it is not the challenge, I guess, but thoroughly checking and verifying it :)
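A minimal sketch of the JSON idea, assuming a simple estimator like `LogisticRegression` whose fitted state lives in a few NumPy arrays (the attribute names follow scikit-learn's trailing-underscore convention; this is not an mlxtend feature, and a general-purpose serializer would need per-estimator handling):

```python
import json

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def to_json(model):
    """Serialize hyperparameters and the fitted arrays to a JSON string."""
    state = {
        "params": model.get_params(),
        "coef_": model.coef_.tolist(),
        "intercept_": model.intercept_.tolist(),
        "classes_": model.classes_.tolist(),
        "n_features_in_": model.n_features_in_,
    }
    return json.dumps(state)

def from_json(s):
    """Rebuild a LogisticRegression from the JSON produced above."""
    state = json.loads(s)
    model = LogisticRegression(**state["params"])
    model.coef_ = np.array(state["coef_"])
    model.intercept_ = np.array(state["intercept_"])
    model.classes_ = np.array(state["classes_"])
    model.n_features_in_ = state["n_features_in_"]
    return model

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)
restored = from_json(to_json(clf))
```

The upside over pickle is that the file is human-readable and not tied to a Python version; the downside is exactly the verification burden mentioned above.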
Thanks for your reply and for sharing your other cool projects! What I faced was that every base model takes more than 10 hours to train, and they were trained on different machines. I saved them with the pickle approach. I am kind of interested in your model-to-JSON project, but from my previous experience, loading a JSON object is really memory-intensive (RAM used >>> the actual file size). I am not sure how large a saved model can be.
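For reference, the standard persistence route for a model that took hours to train is joblib, as the scikit-learn docs linked above suggest (the file name and the RandomForestClassifier here are just placeholders for an expensive base model):

```python
import os
import tempfile

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Persist the fitted model to disk (binary, pickle-based format)...
path = os.path.join(tempfile.mkdtemp(), "base_model.joblib")
joblib.dump(clf, path)

# ...and reload it later, e.g. on the machine that fits the meta-classifier.
restored = joblib.load(path)
```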
Yeah, I think allowing this sort of behavior (fitting the meta-classifier on the existing models but not refitting the base models) would make sense for the StackingClassifier. It would also be super easy to implement.
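Until something like this exists in mlxtend, the same effect can be sketched by hand: keep the pre-fitted base models frozen and fit only a meta-classifier on their stacked predictions. This is a simplified version of what a stacking classifier does internally (class-label meta-features, no probabilities, no cross-validation), and the models and data here are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=1)

# Pretend these were trained earlier (possibly on other machines) and loaded.
base_models = [
    LogisticRegression().fit(X, y),
    DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y),
]

# Stack the frozen base models' predictions as meta-features...
meta_features = np.column_stack([m.predict(X) for m in base_models])
# ...and fit only the meta-classifier; the base models are never refit.
meta_clf = LogisticRegression().fit(meta_features, y)

def predict(X_new):
    """Predict with the frozen base models plus the fitted meta-classifier."""
    feats = np.column_stack([m.predict(X_new) for m in base_models])
    return meta_clf.predict(feats)
```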
Yeah, that would be a limitation. But theoretically, it should not occupy much more memory than the loaded model itself, I think. Regarding the loading process, yeah, I think the default JSON reader is not the most efficient one. I remember a benchmark article from a few years ago. I think there were pretty good alternatives though that could then be chosen as a backend, for example. |
Thanks, @rasbt, sorry for the late reply. Reading into the code, I found that the base learners will be trained even when `use_clones` is `False`, in stacking_classification.py. I achieved what I wanted by commenting out the fitting code in question.
Thanks for the feedback. I think that's not correct, because of the two lines that come before the ones you referenced: it will fit clones of the base learners, but not the base learners themselves. To allow what you had initially in mind, I think the best way would be adding a dedicated parameter for it.
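The clone semantics can be checked directly in plain scikit-learn (nothing mlxtend-specific): `clone` copies only the hyperparameters, so fitting the clone never touches the original estimator.

```python
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
original = LogisticRegression(max_iter=500)

cloned = clone(original)  # unfitted copy with the same hyperparameters
cloned.fit(X, y)

assert hasattr(cloned, "coef_")        # the clone was fitted...
assert not hasattr(original, "coef_")  # ...the original stays untouched
```

With `use_clones=False`, by contrast, the estimators passed in are fitted in place, which is why the behavior above differs between the two settings.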
Is there any way to freeze some models? It would be useful to freeze pre-trained models, for example, when training each model on a different machine, or when training several models and comparing the performance of different base-model combinations.
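The comparison use case can be sketched without ever refitting a base model: fit (or load) each model once, cache its predictions, then evaluate any subset by restacking the cached columns under a fresh meta-classifier. The three models and their names here are placeholders:

```python
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit (or load) each base model exactly once.
models = {
    "logreg": LogisticRegression().fit(X_tr, y_tr),
    "tree": DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr),
    "nb": GaussianNB().fit(X_tr, y_tr),
}
# Cache predictions so combinations can be scored without refitting.
cache_tr = {name: m.predict(X_tr) for name, m in models.items()}
cache_te = {name: m.predict(X_te) for name, m in models.items()}

# Score every pair of base models with a fresh meta-classifier.
for combo in combinations(models, 2):
    Z_tr = np.column_stack([cache_tr[n] for n in combo])
    Z_te = np.column_stack([cache_te[n] for n in combo])
    meta = LogisticRegression().fit(Z_tr, y_tr)
    print(combo, round(meta.score(Z_te, y_te), 3))
```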