
I'd rather see someone implement glue that allows you to run arbitrary (deep learning) code on any platform.

I mean, are we going to see "X on M1 Mac" for any X from now on?

Also, weren't Torch and TensorFlow supposed to be this glue?



Broadly speaking, it looks like they are. The implementation of Stable Diffusion doesn't appear to be using all of those features correctly (e.g., device selection fails if you don't have CUDA enabled, even though MPS (https://pytorch.org/docs/stable/notes/mps.html) is supported by PyTorch).

The same goes for quirks of TensorFlow that weren't taken advantage of. That's largely the work that is ongoing in the macOS and M1 forks.
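
For reference, a minimal sketch of what backend-aware device selection can look like (pick_device is just an illustrative helper, not the actual Stable Diffusion code, and it assumes a PyTorch build new enough to ship the MPS backend):

    import torch

    def pick_device() -> torch.device:
        # Prefer CUDA where it exists, then Apple's Metal (MPS) backend, then CPU.
        if torch.cuda.is_available():
            return torch.device("cuda")
        # torch.backends.mps only exists in newer PyTorch builds, so guard the lookup.
        if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    device = pick_device()
    x = torch.randn(4, 4, device=device)  # lands on whichever backend was found
    print(device, x.sum().item())

Hard-coding "cuda" instead of doing something like this is exactly why it falls over on Apple Silicon.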


I got stuck on this roadblock; I couldn't get CUDA to work on my Mac, and it was very confusing.


That's because CUDA is only for Nvidia GPUs, and Apple no longer supports Nvidia GPUs; it has its own now.


Didn't Apple stop supporting Nvidia cards like 5 years ago? How could it be confusing that CUDA wouldn't run?


Ah, I didn't realize. It's not very obvious what GPU you have in your MacBook; I couldn't actually figure out where to find that in my system settings. On Windows it's inside the "Display" settings, but on macOS... where is it? :)


lol presumably the OP didn't know that... hence the confusion.


    (base)   stable-diffusion git:(main) conda env create -f environment.yaml
    Collecting package metadata (repodata.json): done
    Solving environment: failed
    
    ResolvePackageNotFound:
      - cudatoolkit=11.3
Oh, I was following the GitHub fork README; there is a special macOS blog post.


link?


If you look at the substance of the changes being made to support Apple Silicon, they're essentially detecting an M* Mac and switching to PyTorch's Metal backend.

So, yeah, PyTorch is correctly serving as the 'glue'.

https://github.com/CompVis/stable-diffusion/commit/0763d366e...


As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM[0] and ONNX[1].

These just cover the neural net, though, and there is a lot of surrounding code and pre-/post-processing that isn't covered by these systems.
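
To make the "just the neural net" point concrete, here's a rough sketch of exporting a toy PyTorch module to ONNX and running it with ONNX Runtime (the toy model and file name are made up for illustration):

    import torch
    import onnxruntime as ort

    # Toy stand-in for the network; tokenizers, schedulers, and image decoding
    # live outside what the exported graph captures.
    net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4))
    example = torch.randn(1, 8)

    torch.onnx.export(net, example, "net.onnx", input_names=["x"], output_names=["y"])

    # Run the exported graph with ONNX Runtime; everything around it stays plain Python.
    session = ort.InferenceSession("net.onnx", providers=["CPUExecutionProvider"])
    (out,) = session.run(None, {"x": example.numpy()})
    print(out.shape)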

For models on Replicate, we use Docker, packaged with Cog, for this stuff.[2] Unfortunately, Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.

I wish there were a good container system for Mac. Even better if it spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)

[0] https://tvm.apache.org/ [1] https://onnx.ai/ [2] https://github.com/replicate/cog
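
If it helps, a Cog package is roughly a cog.yaml (declaring the Python/CUDA environment and pointing at a predictor class) plus a predict.py. A bare-bones, self-contained sketch, where the placeholder image stands in for real model code and this is not our actual Stable Diffusion predictor:

    # predict.py -- minimal Cog predictor sketch (https://github.com/replicate/cog).
    # A real predictor would load model weights in setup() and run inference in predict().
    from cog import BasePredictor, Input, Path
    from PIL import Image

    class Predictor(BasePredictor):
        def setup(self):
            # Heavy, one-time work (loading weights) goes here, at container start.
            pass

        def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
            # Placeholder "generation" so the sketch stays self-contained;
            # the prompt is unused here, but a real model would condition on it.
            img = Image.new("RGB", (64, 64))
            out = Path("/tmp/out.png")
            img.save(out)
            return out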



