Well, the problem is that you can't actually have a "realistic" work problem in the space of an interview. Given that constraint I think it's a reasonable approximation.
Sure you can. At least, if our baseline for "realism" is "an approximation of what someone will do on the job".
I'd say asking someone to regurgitate some solution for a random leetcode problem is far worse an approximation than asking someone to write, say, a little toy API that does nothing more than retrieve a value out of a set.
See how well they're able to develop in a language of their choosing. Can they get started immediately or do they stumble putting together the first little building blocks?
Treat it like a "real-world example" and make that clear up front. Do they think about logging and metrics? (For the purposes of a toy interview problem, just writing to stdout for both would be sufficient.) Do they think about dependency injection? What about unit tests?
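To make that concrete, here's a minimal sketch of what such a toy problem could look like. Python is an arbitrary choice, and the names (`LookupService`, the `log` parameter) are hypothetical, but it shows the kind of structure you'd hope to see: the log sink is injected (so tests can silence it), and "metrics" are just lines on stdout.

```python
class LookupService:
    """Toy service: answers whether a key exists in a fixed set of values."""

    def __init__(self, values, log=print):
        # The log sink is dependency-injected; defaulting to print()
        # gives us stdout "logging" for free in a toy setting.
        self._values = set(values)
        self._log = log

    def get(self, key):
        found = key in self._values
        # Poor man's logging/metrics: one structured line per lookup.
        self._log(f"lookup key={key!r} found={found}")
        return found
```

Usage is a one-liner, and the injected log makes a unit test trivial to keep quiet:

```python
svc = LookupService({"apple", "banana"})
svc.get("apple")  # True, and logs the lookup to stdout
```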
Then follow it up by asking them to modify a bit of their logic. ("okay, we've got it returning a matching value from the set if it exists - what if we wanted to add in wildcard support at the end of the incoming string?").
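One plausible reading of that follow-up (treating a trailing `*` on the incoming string as a prefix wildcard; the class name is again hypothetical) would be a small change like:

```python
class WildcardLookupService:
    """Toy lookup with trailing-wildcard support on the incoming key."""

    def __init__(self, values):
        self._values = set(values)

    def get(self, key):
        # A trailing "*" matches any stored value with the given prefix.
        if key.endswith("*"):
            prefix = key[:-1]
            return any(v.startswith(prefix) for v in self._values)
        return key in self._values
```

The interesting part isn't the three lines of logic; it's watching whether the candidate asks clarifying questions (does `*` mean prefix match? what about an empty prefix?) before typing.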
Tons of very real things to consider, even in the constraints of a simple toy problem.
As someone who works in Big Tech, I would much rather have people on my team who think about maintainability, debuggability, monitoring, what can go wrong, etc. etc. (and have shown during an interview they're capable of writing some trivial business logic around that) than someone who absolutely nailed mirroring a binary tree and solving the longest common sub-sequence problem.
Have you ever actually been asked to implement a sorting algorithm or balance a tree from memory? I've done a lot of whiteboard interviews, including at some of the Big Ns, and I can't say I ever experienced this, despite those exercises being used as a kind of metonym for whiteboarding.
It was mostly a joke; I'm a DS, so I get arbitrary take-homes rather than leetcode.
The more general point is that the algorithmic approaches from leetcode problems bear little relation to what most programmers do all day, and as such are less useful as a work-sample test.
Doing a take-home where you fix some bugs would probably work better (i.e. more correlated with outcomes) than leetcode interviews.
Well I think this is an important point, because the tests, IME, are asking you to apply the concepts to solve a toy problem, not to actually implement stuff like sorting algorithms from memory. The latter is indeed unrealistic, but the former is something my job actually does entail.