
Very impressive!

But if something like this can be done to this extent, I can't help wondering what went wrong, and why it has been such a struggle for OpenCL to unify the two fragmented communities. While this is very practical and has a significant impact for people who develop GPGPU/AI applications, for the heterogeneous computing community as a whole, relying on (and promoting) a proprietary interface/API/language as THE interface for working with different GPUs sounds like bad news.

Can someone educate me on why OpenCL seems to be absent from the comments and from any of the recent discussions related to this topic?



OpenCL gives you the subset of capability that a lot of different companies were confident they could implement. That subset turns out to be intensely annoying to program in - it's just the compiler saying no over and over again.

Or you can compile as freestanding C++ with clang extensions and it works much like a CPU does. Or you can compile as CUDA or OpenMP and most of what you write actually turns into code, not a semantic error.

Currently CUDA holds the lead position, but it should lose that place because it's horrible to work in (and, to a lesser extent, because more than one company knows how to make a GPU). OpenMP is an interesting alternative - you need to be a little careful to get fast code out of it, but lots of things work fairly intuitively.
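
To make the "fairly intuitively" point concrete, here's a minimal sketch of OpenMP target offload for a vector add, assuming a compiler built with GPU offloading enabled (e.g. clang with -fopenmp and an --offload-arch flag); the map clauses are where the "be a little careful" part lives:

    // Minimal OpenMP target-offload sketch (vector add).
    #include <cstdio>
    #include <vector>

    int main() {
      const int n = 1 << 20;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
      float *pa = a.data(), *pb = b.data(), *pc = c.data();

      // Map the arrays to the device, run the loop there, copy the result back.
      #pragma omp target teams distribute parallel for \
          map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
      for (int i = 0; i < n; ++i)
        pc[i] = pa[i] + pb[i];

      std::printf("c[0] = %f\n", pc[0]);
      return 0;
    }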

Personally, I think raw C++ is going to win out and the many heterogeneous languages will ultimately be dropped as basically a bad idea. But time will tell. OpenCL looks very much DOA.


If you are going the "open standard" route, SYCL is much more modern than OpenCL and also nicer to work with.
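
For comparison, a minimal sketch of the same vector add in SYCL 2020 (USM style), assuming an implementation such as DPC++ or AdaptiveCpp is installed - single-source C++, with the kernel written as an ordinary lambda:

    // Minimal SYCL 2020 sketch (vector add, shared USM).
    #include <sycl/sycl.hpp>
    #include <cstdio>

    int main() {
      sycl::queue q;                       // picks a default device
      const int n = 1 << 20;
      float *a = sycl::malloc_shared<float>(n, q);
      float *b = sycl::malloc_shared<float>(n, q);
      float *c = sycl::malloc_shared<float>(n, q);
      for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

      // Single C++ source: the lambda below is compiled for the device.
      q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
      }).wait();

      std::printf("c[0] = %f\n", c[0]);
      sycl::free(a, q); sycl::free(b, q); sycl::free(c, q);
      return 0;
    }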


OpenCL isn't nice to use and lacks tons of quality-of-life features. I wouldn't use it even if it were twice as fast as CUDA.
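
For anyone who hasn't written OpenCL host code, a condensed sketch of the ceremony needed to launch a single vector-add kernel illustrates the point (error checking and the matching clRelease* cleanup calls are omitted; assumes an OpenCL 2.x runtime and headers):

    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    // The kernel itself is a string, compiled at runtime.
    static const char* kSrc = R"(
    __kernel void vadd(__global const float* a, __global const float* b,
                       __global float* c) {
      int i = get_global_id(0);
      c[i] = a[i] + b[i];
    })";

    int main() {
      const size_t n = 1 << 20;
      std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

      // Platform -> device -> context -> queue -> program -> kernel -> buffers,
      // all before a single line of compute runs.
      cl_platform_id plat; clGetPlatformIDs(1, &plat, nullptr);
      cl_device_id dev;    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
      cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
      cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, nullptr, nullptr);

      cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
      clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
      cl_kernel k = clCreateKernel(prog, "vadd", nullptr);

      cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), a.data(), nullptr);
      cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), b.data(), nullptr);
      cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, nullptr);

      clSetKernelArg(k, 0, sizeof(cl_mem), &da);
      clSetKernelArg(k, 1, sizeof(cl_mem), &db);
      clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

      clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
      clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

      std::printf("c[0] = %f\n", c[0]);
      return 0;
    }

That's roughly fifty lines before any error handling, for the same computation that the SYCL and OpenMP sketches above express in about a dozen.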



