Inconsistent predictions (confidence values) with multiple runs #15
Comments
More insights into the problem: (1) The problem is not with my pretrained model; whenever ferret loads any pretrained model, it produces completely different explanations and prediction values. (2) Just reloading ferret every time gives consistent results. So it seems some randomness is being introduced when ferret loads the model.
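If load-time randomness is the suspect, one quick check (a sketch, not a confirmed ferret fix) is to pin every RNG and put the model in eval mode right before handing it to ferret:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Pin the RNGs that a model load or explainer might draw from."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)
# Also call model.eval() before explaining: dropout layers left in train
# mode yield different logits (and confidence values) on every forward pass.
```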
Hi, thank you for reaching out.
About the randomness of the results, I've been trying to reproduce the issue. Is there any model checkpoint that you can share so that I can try it with yours?
Hi @kgarg8, any news here? :)
Yes, I do have updates. I think the problem was that I performed some extra steps, let's say 'X', before my original model evaluation. The way I resolved it was to do the same 'X' steps first and then work with the ferret model. This made the ferret predictions consistent with my original model evaluation.
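For context, a hypothetical sketch of that pattern (the `setup_x` name, seed, and checkpoint path are illustrative, not from the thread): run the identical setup before both the original evaluation and the ferret run, so the model reaches the same state each time.

```python
import torch
from transformers import AutoModelForSequenceClassification

CHECKPOINT = "path/to/my-finetuned-model"  # hypothetical placeholder

def setup_x():
    """Stand-in for the unspecified 'X' steps: same seed, same load order."""
    torch.manual_seed(0)
    model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT)
    model.eval()
    return model

model_for_eval = setup_x()    # used for the original evaluation
model_for_ferret = setup_x()  # passed to ferret's explainer
```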
Description
I am loading ferret's explainer with my pretrained model (for a classification task on an NLP dataset), but:
(1) every run of the explainer gives me different confidence values;
(2) [possibly a consequence of problem 1] the explainer's prediction is often inconsistent with the pretrained model's prediction.
What I Did
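A minimal sketch of the kind of setup involved (assuming ferret's `Benchmark` API; the checkpoint name and input text are placeholders, not the ones from the original report):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark

# Placeholder checkpoint; substitute your own pretrained classifier.
name = "distilbert-base-uncased-finetuned-sst-2-english"

model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

bench = Benchmark(model, tokenizer)
explanations = bench.explain("This movie was great.", target=1)
```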
Output (illustrating problem 1: different confidence values across runs)