That’s absolutely right, but there’s another issue with the LeetCode-style interview that hasn’t been getting much attention lately, including in this article. My company is hiring right now, and we’ve shifted all of our initial interviews to Zoom, where we include a brief coding task. However, it’s becoming more and more clear that many applicants are using LLMs to produce their code. It’s reached the point where it feels almost impossible to assess whether someone can actually code on their own in these remote settings. On the other hand, it’s much harder to lean on an LLM in a more open-ended, conversational interview without it feeling unnatural. That, I think, is one of the biggest flaws in the current remote coding interview setup.
In my opinion, pair programming and system design discussions are important parts of the interview process. Those sessions let hiring teams assess how a candidate leverages AI tools to build features, how they debug, and how they think about solving system-level problems.
However, I'm convinced the future of technical screening for software developers lies in code reviews rather than in evaluating code production alone.
The ability to review code is crucial in our industry. You'll be reviewing code often regardless of who (or what) generated it.
We use a coding test that is more like a trivia quiz about cursed language syntax and bare-metal embedded concepts: bit-twiddling puzzles, casting structs to unions with byte-alignment quirks.
On the surface it's even more contrived than LeetCode, but it has a few benefits:
1. Harder to memorise and prepare for.
2. Harder to hand off to an LLM.
3. Checks formal schooling or detailed interest in the topic.
Learning C through toy projects won't get you through this quiz. Spending dedicated effort reading about the inner workings of malloc, RTOSes, chipset datasheets, and electronics might.
Many of the questions test understanding beyond the syntax, often one level of abstraction down. For embedded work this fits nicely; higher-level system design thinking isn't applicable to us. We look for people with the mindset and interest to debug the most absurd behavioral quirks of the hardware when the code misbehaves.
But for other fields I think this would fall apart. It works particularly well for bare-metal embedded.
(... I think our interview process may be selecting for ASD as a side effect.)
This is a good point. I thought of including it but decided against it at the last minute. I will update the article to briefly mention this, because I also think it is becoming a problem. Thanks for the suggestion.