Broadly speaking, it looks like they are. The Stable Diffusion implementation doesn't appear to be using all of those features correctly (e.g. device selection fails if you don't have CUDA enabled, even though MPS (https://pytorch.org/docs/stable/notes/mps.html) is supported by PyTorch).
The same goes for quirks of TensorFlow that weren't taken advantage of. That's largely the work that is ongoing in the OSX and M1 forks.
Ah, I didn't realize. It's not very obvious what GPU you have in your MacBook; I couldn't actually find that anywhere in my System Settings. On Windows it's inside the "Display" settings, but on macOS... where is it? :)
If you look at the substance of the changes being made to support Apple Silicon, they're essentially detecting an M* mac and switching to PyTorch's Metal backend.
So, yeah PyTorch is correctly serving as a 'glue'.
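To make that concrete, here's a minimal sketch of the kind of device-selection change those forks make: fall back to MPS (and then CPU) instead of assuming CUDA. The `pick_device` helper name is my own, not from any of the forks.

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available backend instead of hard-coding CUDA."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # torch.backends.mps exists in PyTorch 1.12+; guard for older versions.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model_input = torch.randn(1, 4).to(device)  # tensors then follow the chosen device
```

On an M-series Mac with a recent PyTorch this returns the `mps` device; everywhere else it degrades gracefully.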
As mentioned in sibling comments, Torch is indeed the glue in this implementation. Other glues are TVM[0] and ONNX[1]
These just cover the neural net though, and there is lots of surrounding code and pre-/post-processing that isn't covered by these systems.
For models on Replicate, we use Docker, packaged with Cog for this stuff.[2] Unfortunately Docker doesn't run natively on Mac, so if we want to use the Mac's GPU, we can't use Docker.
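For context, a Cog-packaged model is driven by a `cog.yaml` that pins the environment and points at the prediction code. This is a hedged sketch; the package versions and the `predict.py:Predictor` entry point are illustrative, not from Replicate's actual Stable Diffusion config.

```yaml
build:
  gpu: true
  python_version: "3.10"
  python_packages:
    - "torch==2.0.1"
predict: "predict.py:Predictor"
```

The `gpu: true` line is precisely where this runs into trouble on a Mac: it assumes NVIDIA/CUDA inside a Linux container, which Docker Desktop's VM can't pass the Apple GPU through to.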
I wish there was a good container system for Mac. Even better if it were something that spanned both Mac and Linux. (Not as far-fetched as it seems... I used to work at Docker and spent a bit of time looking into this...)
I mean, are we going to see X on M1 Macs, for any X, now and in the future?
Also, weren't torch and tensorflow supposed to be this glue?