Their head of AI alignment clearly has no idea how to approach alignment, as you can see here in this 30-minute ramble on the subject that goes nowhere.
Going just from published research, Lilian Weng's focus seems to be LLM agents, safety, and alignment: how models are used, guided, and evaluated, not how they're built.
It seems Horace He focuses on deep learning systems and compiler optimization, improving the performance of frameworks like PyTorch.
While both are clearly highly capable, and could perhaps branch into other areas, again judging just from published papers, neither seems to have published work on core LLM architectures or foundation model training, the kind of work that could yield a scientific advance in the performance of current models.
Their contributions seem to center on usability and efficiency, not the underlying design or scaling of modern LLMs.
If that is the core team, I would worry about whether they have the researchers capable of producing a breakthrough worthy of the billions committed. But maybe that is why they are still hiring?