
Yes, certainly. One industry use-case that comes to mind is Baidu; white paper linked below [1]. Pretty much all the large model developers distribute their model training across hardware in some way, using a blend of GPU/TPU/FPGA accelerators spread over multiple CPU nodes. Moving all that data around is expensive, though, in both power consumption and time, which is why NVIDIA's new system would be of interest.

[1] http://research.baidu.com/Public/uploads/5e76df66c467b.pdf
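To make the data-movement point concrete, here's a minimal data-parallel training step in JAX. This is my own sketch (toy linear model, made-up shapes, SGD step), not Baidu's actual setup; the point is that the pmean all-reduce is precisely the cross-device communication whose cost a faster interconnect would cut.

    import functools
    import jax
    import jax.numpy as jnp

    def loss_fn(params, x, y):
        # Toy linear model: params is a (weights, bias) tuple.
        w, b = params
        pred = x @ w + b
        return jnp.mean((pred - y) ** 2)

    @functools.partial(jax.pmap, axis_name="devices")
    def train_step(params, x, y):
        grads = jax.grad(loss_fn)(params, x, y)
        # Cross-device gradient sync: this all-reduce is the
        # communication step that dominates at scale.
        grads = jax.lax.pmean(grads, axis_name="devices")
        # Plain SGD update, applied identically on every device.
        return jax.tree_util.tree_map(lambda p, g: p - 0.01 * g,
                                      params, grads)

    # Replicate parameters and shard the batch across local devices.
    n = jax.local_device_count()
    params = (jnp.zeros((4, 1)), jnp.zeros(1))
    params = jax.device_put_replicated(params, jax.local_devices())
    x = jnp.ones((n, 8, 4))   # leading axis = one shard per device
    y = jnp.ones((n, 8, 1))
    params = train_step(params, x, y)

Every step, each device computes gradients on its own shard and then all devices exchange them, so the interconnect is on the critical path of every iteration.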



This is fantastic, thanks.




