M1 "cuda" Support? #131
Same issue here. I have not yet been able to find a solution by myself; hoping this will be resolved soon. |
Yeah, it's weird this is CUDA only, but over at deep-daze, it can be NVIDIA or AMD. |
PyTorch is now compatible with M1 (see https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/). Are there any plans for big-sleep to support it as well, or will it remain CUDA-only? |
M1 support is definitely not ready yet. I tried it and could run some performance tests, but the CUDA replacement ("mps") did not work correctly, especially in the backward direction. |
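The kind of backward-pass check described above can be sketched with a tiny gradient test. This is an illustrative sketch, not big-sleep code: it selects the mps backend when a sufficiently recent PyTorch build (1.12+) reports it as available, falls back to CPU otherwise, and compares one gradient against its analytic value.

```python
import torch

# Prefer the experimental "mps" backend on Apple Silicon builds of
# PyTorch (>= 1.12); fall back to CPU everywhere else.
if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Minimal backward-pass sanity check: d/dx (x^2) at x=3 is 2*x = 6.
# If a backend's backward direction misbehaves, the gradient won't match.
x = torch.tensor(3.0, device=device, requires_grad=True)
loss = x * x
loss.backward()
print(device.type, float(x.grad))
```

On a correctly working backend the printed gradient is 6.0; a mismatch on "mps" but not on "cpu" would point at the backward-direction problems mentioned above.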
Does M1 even have CUDA? I don't see it listed on any specification. I thought they had their own thing, called ALU cores, not CUDA cores.
|
PyTorch calls it mps and uses "Metal Performance Shaders"; see the link in rrgoni's comment. |
Yeah, I am aware of MPS. I just mean it isn't CUDA support. Apple GPUs use ALU cores. PyTorch seems to provide wrapper support via MPS, which, as PyTorch states, is specific to each Apple GPU. That may be the issue people are encountering: the MPS support wasn't done for their particular GPU.
|
OK. I'd be surprised, though, if the people at PyTorch did not know what they were doing when implementing accelerated tensor operations using mps. From an application developer's point of view, what is available at the moment is not even alpha. I could not get a simple loss.backward() to work correctly: it runs, but does not converge like it does on CPU. Quite soon one also runs into "not implemented" territory. And finally, the speed improvement over CPU was not much. PS. Now I see what you meant... that there are several incompatible M1 GPU implementations? |
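The convergence complaint above can be reproduced with a minimal optimization loop. This is a hedged sketch (a toy problem, not big-sleep's actual loss): gradient descent fitting the weight in y = w*x should drive w to 2.0 on CPU, so running the same loop on another backend gives a quick converges-or-not comparison.

```python
import torch

def fit(device: str) -> float:
    """Fit w in y = w * x by SGD; converges to w ~ 2.0 on a working backend."""
    torch.manual_seed(0)
    w = torch.zeros(1, device=device, requires_grad=True)
    xs = torch.linspace(-1.0, 1.0, 32, device=device)
    ys = 2.0 * xs  # ground-truth slope is 2
    opt = torch.optim.SGD([w], lr=0.5)
    for _ in range(200):
        opt.zero_grad()
        loss = ((w * xs - ys) ** 2).mean()
        loss.backward()
        opt.step()
    return float(w)

print(round(fit("cpu"), 3))  # -> 2.0
```

If the same call with device "mps" stalls far from 2.0 while "cpu" converges, that matches the behavior described in the comment above.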
At the same time, we have multi-million-dollar software developers that can't properly support modern CPUs in general because of their limited ability to test. I mean, look at the game industry and hardware support. Even if you follow the spec, unless you can test on that hardware, there may be literally game-breaking or software-breaking bugs.
Edit: A great example of this is Mortal Kombat X. It supported maybe a quarter of modern CPUs at launch. In fact, to this day, I still can't launch the game, because it's also not forward-compatible with most newer CPUs like Ryzens. All three PCs I've had since its launch were incompatible. Unlike my friends on generic high-profile Intels, I had an AMD APU, an AMD CPU, and dual Xeons (server processors). Now I have a Ryzen 5 and still can't launch it. If I had an AMD FX chipset, I bet it'd work.
|
I'll get my coat. I was hoping to get an M1 Studio to replace one of my Linux boxes, but maybe it's not worthwhile to expect much. Edit: Anyhow... I only wanted to point out that the M1 support is by no means ready. Without knowing the details, even a quick look at their issue tracker seemed to show that. |
Hi all,
I'm new, and this might be a newbie question...
Is there a way to emulate CUDA on Mac M1 GPUs?
Or to run big-sleep on the CPU?
I have TensorFlow installed and running on my M1 Mac.
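To the question above: there is no CUDA emulation on Apple Silicon, since `torch.cuda.is_available()` is always False there; the practical options are the experimental "mps" backend or plain CPU. A minimal device-selection sketch (an illustrative helper, not part of big-sleep, which currently assumes CUDA):

```python
import torch

def pick_device() -> torch.device:
    """Best available backend: CUDA > MPS (Apple Silicon, PyTorch 1.12+) > CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

print(pick_device())
```

On an M1 Mac with a recent PyTorch this prints "mps" (or "cpu" on older builds); getting big-sleep itself to use that device would still require changing its hard-coded .cuda() calls.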