Don't need `kwargs['load_in_4bit'] = True` when using `quantization_config`
LLaVA/llava/model/builder.py
Lines 34 to 40 in c121f04
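The pattern flagged by the issue can be sketched as follows. This is a hedged paraphrase, not the exact contents of the referenced lines; a plain dict stands in for `transformers.BitsAndBytesConfig` so the sketch is self-contained:

```python
# Sketch of the conflicting pattern: the loading kwargs carry both the
# standalone boolean and a quantization config that already encodes it.
kwargs = {}
kwargs['load_in_4bit'] = True            # redundant standalone flag
kwargs['quantization_config'] = {        # stand-in for transformers.BitsAndBytesConfig
    'load_in_4bit': True,                # the config already carries the flag
}
```

With newer transformers releases, passing both to `from_pretrained` raises a ValueError along the lines of "You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg when passing `quantization_config`".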
Happens with transformers 4.44.2. The ValueError was introduced in transformers (huggingface/transformers#21579) instead of a slow deprecation. The change was merged in huggingface/transformers@3668ec1 (part of v4.27.0 and onward), and since this project requires `transformers==4.37.2`, there is no reason to keep passing deprecated booleans that trigger this error.
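The suggested fix can be sketched as below: build the kwargs with only the `quantization_config` entry and drop the standalone boolean. `build_load_kwargs` is a hypothetical helper used for illustration, and a plain dict again stands in for `transformers.BitsAndBytesConfig`:

```python
def build_load_kwargs(load_4bit: bool) -> dict:
    """Sketch of the corrected kwargs construction for builder.py."""
    kwargs = {}
    if load_4bit:
        # Previously kwargs['load_in_4bit'] = True was also set here,
        # which newer transformers rejects with a ValueError when a
        # quantization_config is present. The flag inside the config
        # is sufficient on its own.
        kwargs['quantization_config'] = {
            'load_in_4bit': True,        # carried by BitsAndBytesConfig
        }
    return kwargs
```

Dropping the standalone flag changes nothing for `transformers==4.37.2`, since the config's own `load_in_4bit` already drives 4-bit loading, and it avoids the ValueError on newer releases.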