Allow candidates to use LLMs in job interviews
If you interview people remotely for software engineering positions, then you need a strategy to deal with people cheating by using LLMs. One option is to let them do it and adjust your questions accordingly.
Note: this does not reflect the views of my employer.
Developers are cheating on interviews using LLMs. This is happening today. If you don’t deal with this reality, then your interview process will be trivial to game by anyone who can figure out how to get screenshots into an LLM during an interview.
And people are doing this today. I’ve heard multiple stories of candidates failing phone screens for using LLMs during the process. They weren’t even trying to hide it; they were just typing the question and reading off their screen.
But some people are more clever. A sophomore at Columbia University named Lee wrote a program to cheat on technical interviews and passed hiring screens at Amazon, Meta, and TikTok.
Lee's program, Interview Coder, was built to help him and others bypass the process. It does the hard work of the technical interview for you, and he claims it's completely invisible to the programs that the Big Tech companies use to monitor a prospective employee's computer.
“In reality, the product is really simple,” he said. “You take a picture, and then you ask ChatGPT, ‘Hey can you solve the problem in this picture?’ Literally, that’s the entire product. Someone could probably build a working prototype version of this that works in less than 1,000 lines of code.” Don’t take his word for it. It’s on GitHub here.
Go look at his marketing site. Read through the features. All you do is press a keyboard shortcut. It takes a screenshot, feeds it to an LLM, and then overlays the solution to the coding problem for you. You can use a different shortcut to tell it that you have a bug and it’ll try to fix it for you.
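Lee isn't exaggerating about the simplicity. As a rough sketch (the function name is mine, and the payload follows the OpenAI vision chat format; any vision-capable API would work similarly), the core of such a tool is just packaging a screenshot with a prompt:

```python
import base64

def build_vision_request(png_bytes: bytes, prompt: str) -> dict:
    # Encode the screenshot as a base64 data URL, the inline-image
    # format accepted by the OpenAI chat-completions vision API.
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # any vision-capable model would do
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
    }

# A hotkey handler would grab the screen (e.g. with a screenshot library
# like mss), build this payload, POST it to the API, and overlay the reply.
request = build_vision_request(b"\x89PNG...",
                               "Can you solve the problem in this picture?")
```

Everything else — the global hotkey, the invisible overlay window — is plumbing, which is why a working prototype plausibly fits in well under 1,000 lines.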
On the site it’s clear that it can solve Leetcode problems without issue. I tried an LLM on a system design problem I’ve asked in interviews. I gave Claude the prompt “My team and I are building an application, and we are brainstorming the system design. Attached is a screenshot describing the problem. What components would you add to the system?” and then the follow-up prompt “Please describe each component of the system in detail and explain how they work together to solve the problem”. I attached a screenshot that acts as the problem prompt. Claude didn’t do well enough to pass the interview, but it did a great job brainstorming. It listed plausible components and technologies that could solve the problem. It described how the attributes of each component and technology addressed the problem brief. It was also very thorough; so thorough that it could easily prompt a candidate to talk about an aspect of the design they might have overlooked.
Why didn’t it pass? It simply didn’t go into enough detail. Could it help someone who didn’t know anything about programming get an offer for that problem? No; the first follow-up question that went off script would have sunk it. Could it have taken a marginal candidate from an “I’m not sure” to a “yes”? Absolutely!
This is the latest iteration of the cheating problem that has plagued remote interviews. When I was at Google, we fought an endless battle against people posting our interview questions online. Everyone has stories of hearing candidates search for the answers to interview questions during phone screens, or of finding Stack Overflow posts that solved interview questions outright. Going even further back, in 2008 I conducted a phone screen with an applicant who passed with flying colors, and then a different person came onsite and tried to get through the day without writing any code or even answering questions about the side projects “he” had discussed happily over the phone.
So yeah. People cheat. And now they have access to a tool that does a good job on coding questions and provides an unfair advantage on system design questions. If you’re not adapting to this reality, then you are suffering from it.
How do you deal with an environment where it’s trivial to look up information? I liked the approach we used at Etsy: we turned the problem on its head. We told candidates, “You can search for anything you want and use any outside resources that you want,” and we changed our questions to reflect this. Sometimes candidates would be a little nervous using this power. “I’m, uh, Googling how to use this Python API. Is that okay?” And you’d tell them “yes,” and now they’re working like a real software engineer, instead of coding with one arm tied behind their back.
And the bar wasn’t lower for them. They still needed to have conversations and share their thinking. They still needed to solve technical problems. But we adapted to the modern age. And frankly, if a question was so simple that its answer could be found on Stack Overflow, it was probably a bad question to begin with.
So I am proposing the modern equivalent of this: tell candidates they can use any resource they want, including LLMs. If you feel that the problems are too easy with LLM assistance, then they should be changed. Tell the candidates ahead of time. When you schedule the interview, tell them “during the interview, you are free to use outside resources like Google or LLMs as long as you tell us what you are doing.”
This does not lower the bar for them. They should be able to explain the code that they are presenting as their final solution, with full command of the skills required to be a modern software engineer.
Potential objection: “My company does not allow LLMs.”
Very well! But you still need a strategy to handle the fact that candidates will try to cheat on interviews. I presented one option in this post, which leans into the fact that more and more engineers are using them. You have other options such as bringing candidates on site and trying to make the questions you ask remote engineers resistant to LLM use.
Potential objection: “It doesn’t test how they work through the fundamentals.”
Here’s a thought experiment: you put the candidate in a private room. In the room is a computer and it magically cannot communicate with other humans, but can look up help resources just fine. You ask them a coding question. They ask about requirements. They work through an example by hand until it is clear that they fully understand the problem. Then you leave the room for 25 minutes and come back. They have a function on the laptop. It passes all test cases. They can explain every line of it, and they give a convincing argument about why the algorithmic efficiency is optimal. Then I pose you the question: why do you care how the code got there? The outcome isn’t different. They simply used a different tool than typing the code letter by letter. Shouldn’t they be lauded for using a tool that produces the correct result faster?
Put another way, the candidate still shoulders the burden to demonstrate mastery of the skills required to be a software engineer. You ask a question, and somehow some code ends up in the editor. They’re not done. This is the point where I ask candidates in coding interviews, “Is your solution correct?” And this is where most candidates shoot themselves in the foot. The good ones will walk through the code line-by-line and explain their reasoning for why they believe their code meets the problem brief, and give a convincing argument. But that’s just the good ones. This is how you want to handle LLM-using candidates; when they put something down and present it as a solution, ask them, “Is this correct?” and make them work through it.
Potential objection: “I believe that engineers shouldn’t code with these tools because they produce garbage.”
If your company allows people to code with Cursor/Windsurf/Copilot, don’t you want to vet the people who do it? They’re going to be sitting next to you, generating an incredible amount of code and then sending it out for review. Don’t you want to see how they consider the output of the machines? Isn’t it possible that they’re perfectly fine coding without those tools, but if they get an autocomplete that looks close enough they just accept it without thinking about it?
Potential objection: “The correct solution to the problem is bringing people onsite for interviews”
You could do this in one of two ways:
1. Limit yourself to local engineers who can travel to your interview without needing accommodations.
2. Fly candidates in.
#1 limits your candidate pool. This is probably fine if you’re small and located in a major tech hub; you’ll scrounge up an engineer somewhere. But it isn’t realistic for most of the world. Eventually you’ll exhaust your local candidate pool and start wishing that you had access to other candidates.
#2 is expensive. And I suspect that a lot of businesses have grown happy with the reduced expense of interviewing candidates remotely. It turns out that people need to sleep and eat, so you’ll need to put them in a hotel and maybe even feed them. And there are several sources of drop-off: most candidates don’t get offers, some candidates who get offers won’t accept them or will try to negotiate unacceptable terms, and so on.
Ultimately, this is likely not an appealing option for many companies in The Year of Our Lord 2025. Large tech companies will most likely shoulder this burden without even thinking about it. But below a certain company-size threshold, companies would prefer the smaller interviewing budget¹. So it’s important for those companies to account for LLM cheating.
To recap
People cheat on developer interviews using LLMs today
You should handle this, one way or another
I believe that you should handle this by allowing candidates to use LLMs and tailoring your questions accordingly.
But there are other options if you’re not allowed to do this.
Candidates are still held to the same standard, and still need to be just as confident in the final code they produce.
¹ Even though hiring the wrong engineer is ridiculously expensive.