Extra ai tools error #189
Hi, I will try to test and check these during the weekend. I have an RX 5700 myself, but I can do the build for the RX 5500 and try to check whether I can force my RX 5700 to be detected as an RX 5500. Have you been able to run any other tests, for example the benchmark script /opt/rocm_sdk_612/docs/examples/pytorch/pytorch_cpu_vs_gpu_simple_benchmark_jupyter.sh?
In the meantime, I tested vLLM just in case with a fresh build on AMD's 7700S GPU and Ubuntu 24.04. Here is the successful output of the first test:

```
WARNING 01-09 18:41:01 rocm.py:13]
INFO 01-09 18:42:09 model_runner.py:1025] Loading model weights took 0.6178 GB
Prompt: "Tell me about the quote 'Float like a butterfly, sting like a bee'"
Prompt: 'Will Lauri Markkanen be traded to Golden State Warriors?'
Prompt: 'Tell me about the story of Paavo Nurmi Statues in Swedish Vasa war ship?'
Prompt: 'Who is Diego Maradona?'
```
With Stable Diffusion I am getting the same error and no picture unless I set the "Sampling steps" slider below 20.
I've managed to get vLLM working after forcing the reinstall with the --no-deps argument, and the test now works correctly.
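For reference, a reinstall of the kind described above could look like the sketch below. The wheel filename is an assumption here, since the actual name depends on the vLLM version and Python ABI your build produced:

```shell
# Force-reinstall the locally built vLLM wheel without letting pip
# pull in (or replace) dependencies, which could otherwise overwrite
# the ROCm-specific packages the SDK build already installed.
# The wheel name below is a placeholder -- use the one your build produced.
pip install --force-reinstall --no-deps vllm-*.whl
```

The key point is --no-deps: without it, pip may resolve vLLM's declared dependencies from PyPI and clobber the custom-built ROCm packages.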
Thanks for confirming. I guess it's fine to close this now, unless you want to add some more documentation to README.md in case somebody else runs into the same problem?
Hello, I have built the extra AI tools as stated in the README, but I get an error when running two of them: vllm and stable-diffusion-webui.
I built and installed vllm, but every time I try to use it I get:
ModuleNotFoundError: No module named 'vllm'
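A ModuleNotFoundError right after a seemingly successful install usually means the script is running under a different Python interpreter than the one vllm was installed into. A quick diagnostic sketch (the exact paths on your system are assumptions):

```shell
# Show which interpreter is actually picked up from PATH.
command -v python3
python3 -c "import sys; print(sys.executable)"

# Look up vllm without crashing when it is missing: find_spec returns
# None for an absent top-level module instead of raising an error.
python3 -c "import importlib.util, sys; spec = importlib.util.find_spec('vllm'); print(spec.origin if spec else 'vllm NOT found for ' + sys.executable)"
```

If the reported interpreter is the system Python rather than the SDK's (e.g. one under /opt/rocm_sdk_612), sourcing the SDK environment before running the script, or reinstalling with that interpreter's pip, is the usual fix.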
Every time I generate an image using Stable Diffusion WebUI I get the same error:
pydantic_core._pydantic_core.ValidationError: 3 validation errors for ProgressResponse
It seems that the image is generated, but it is not shown on the web page.
I am using rocm_sdk_builder 6.1.2 with an RX 5500M (gfx1012). Llama.cpp works great.