
Target module QuantLinear() is not supported. Currently, only torch.nn.Linear and Conv1D are supported #9

Open
GameSkySDK opened this issue May 17, 2023 · 3 comments

Comments

@GameSkySDK

I have installed llmtune according to the README, and

llmtune generate --model llama-13b-4bit --weights llama-13b-4bit.pt --prompt "the pyramids were built by"

works. But loading the LoRA adapter fails:

llmtune generate --model llama-13b-4bit --weights llama-13b-4bit.pt --adapter alpaca-lora-13b-4bit --instruction "Write a well-thought out recipe for a blueberry lasagna dish." --max-length 500

/usr/local/python3.9.16/lib/python3.9/site-packages/peft/tuners/lora.py:161 in add_adapter

    158             model_config = self.model.config.to_dict() if hasattr(self.model.config, "to_dict") …
    159             config = self._prepare_lora_config(config, model_config)
    160             self.peft_config[adapter_name] = config
  ❱ 161         self._find_and_replace(adapter_name)
    162         if len(self.peft_config) > 1 and self.peft_config[adapter_name].bias != "none":
    163             raise ValueError(
    164                 "LoraModel supports only 1 adapter with bias. When using multiple adapte…

/usr/local/python3.9.16/lib/python3.9/site-packages/peft/tuners/lora.py:246 in _find_and_replace

    243                                 )
    244                                 kwargs["fan_in_fan_out"] = lora_config.fan_in_fan_out = …
    245                             else:
  ❱ 246                                 raise ValueError(
    247                                     f"Target module {target} is not supported. "
    248                                     f"Currently, only torch.nn.Linear and Conv1D are supported."
    249                                 )

ValueError: Target module QuantLinear() is not supported. Currently, only torch.nn.Linear and Conv1D are supported.
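
For context, the branch that raises is PEFT's module-replacement dispatch. Reconstructed from the traceback above (a paraphrase, not the exact peft source), the check looks roughly like this:

    import torch
    from transformers.pytorch_utils import Conv1D  # HF's GPT-2-style Conv1D

    def check_lora_target(target, lora_config, kwargs):
        # Paraphrase of peft/tuners/lora.py _find_and_replace (see traceback):
        # PEFT only knows how to wrap these two layer types with LoRA.
        if isinstance(target, torch.nn.Linear):
            return "wrap as a LoRA Linear layer"
        elif isinstance(target, Conv1D):
            kwargs["fan_in_fan_out"] = lora_config.fan_in_fan_out = True
            return "wrap as a LoRA Conv1D layer"
        # llmtune's 4-bit QuantLinear matches neither branch, hence the error
        raise ValueError(
            f"Target module {target} is not supported. "
            f"Currently, only torch.nn.Linear and Conv1D are supported."
        )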

Can anyone help?

@GameSkySDK
Author

The LoRA adapter was downloaded from
https://huggingface.co/kuleshov/alpaca-lora-13b-4bit

@wyklq

wyklq commented May 18, 2023

I hit the same issue while finetuning. It looks like the PEFT module is not compatible with the PyTorch version.

@wyklq

wyklq commented May 19, 2023

The root cause of this error has been found: the quant_cuda Python module, which should be installed by running python setup.py install, is missing.
Unfortunately, in my setup the script installs a quant_cuda-0.0.0-py3.8-linux-x86_64.egg/ directory into site-packages, whereas the expected behavior is to install the quant_cuda-0.0.0-py3.8-linux-x86_64.egg file from the dist/ directory into site-packages.
I suspect the author only tested the conda-based method, not the raw setup.py method. As a workaround, copy the egg file from dist/ into site-packages to fix the quant_cuda installation manually. Then it works.
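
A sketch of that workaround (the egg name is taken from the comment above; the destination path is an assumption and must match your actual Python's site-packages directory):

    # Run from the llmtune source tree, after `python setup.py install`
    # has produced the egg under dist/. Paths below are illustrative.
    cp dist/quant_cuda-0.0.0-py3.8-linux-x86_64.egg \
       "$(python -c 'import site; print(site.getsitepackages()[0])')"
    # Verify that the CUDA extension now imports:
    python -c "import quant_cuda"

If the import still fails, check that the egg is referenced from site-packages/easy-install.pth; plain .egg files are only importable when something puts them on sys.path.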
