Execution fails under WSL #40
Use the -ng flag first. For CUDA support, you need to pass -DGGML_CUDA=ON when running cmake.
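A minimal rebuild sketch of the advice above, assuming the project uses a standard CMake layout (the `build` directory name and Release config are assumptions, not from the thread):

```shell
# Configure with CUDA support enabled, then build.
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```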
Thanks for the feedback; we will add more diagnostic messages in a future update.
But I rebuilt with -DGGML_CUDA=ON and it still crashes.
@lovemefan The model file may be corrupted.
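One quick way to rule out a corrupted or truncated download is to compare the file's SHA-256 digest against a published checksum. A hedged sketch; `MODEL` and `EXPECTED` are placeholders, not values from this thread:

```shell
# Verify a downloaded model file against a published checksum.
# Substitute the real file name and the checksum published alongside
# the model, if one is provided.
MODEL=model.bin
EXPECTED=0000000000000000000000000000000000000000000000000000000000000000
got=$(sha256sum "$MODEL" | awk '{print $1}')
if [ "$got" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch: got $got"
fi
```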
Have you updated to the latest model? The latest model is available on ModelScope.
I downloaded it from Hugging Face; is the Hugging Face version outdated?
The latest model has not been uploaded to Hugging Face yet.
My WSL2 environment is already configured and runs whisper and llama.cpp normally, and CUDA is detected. I then built the project following the README, and the program crashes on launch.