My Issues with Coding Interviews
If you’ve ever interviewed for a job as a coder in any capacity (software engineer, data scientist, etc.), then chances are that you had to do a coding assessment somewhere in the process. And that assessment might have been a variation of the questions you see on HackerRank or in Cracking the Coding Interview.
I’m just going to come out and say it: I really, really, really don’t like these types of interviews. I’ll go into the reasons why here, and highlight alternatives that I consider more equitable ways of hiring technical talent.
Don’t get me wrong; this is not to say that all companies are doing coding assessments the wrong way. I’ve had a number of interviews where the coding assessments felt like an accurate representation of what I’d be doing as a data scientist/engineer.
To clarify, the coding interviews I take issue with are the ones that come from the aforementioned sources. I will refer to these as “whiteboarding interviews”.
My reasons are as follows:
1: They’re inconsistent
The difficulty and subject matter of whiteboarding interviews vary wildly from interview to interview, even within the same company. One whiteboarding question can be solved in 5 minutes, whereas another might take up the full allotted time.
interviewing.io found that performance on practice interviews was prone to a lot of variation, and that a user passing/failing one interview was only weakly indicative of performance on the next interview that came their way.
According to Triplebyte, there’s no agreement in the industry in terms of what constitutes a “good engineer”.
2: They aren’t representative of job performance
Whiteboarding interviews tend to assess topics that rarely come up in day-to-day coding work. The famous tweet from Max Howell is the prime example of this: despite having written Homebrew, software used by a large share of Google’s own engineers, he was rejected by Google after being unable to invert a binary tree on a whiteboard.
I have worked as a data scientist for almost 5 years now. While I have played around with recursion and graph traversal algorithms (breadth-first/depth-first search), I have never encountered a problem in the field that required knowing those topics to the depth expected in whiteboarding interviews.
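To make it concrete, here is a minimal sketch of the kind of question these interviews ask — the “invert a binary tree” problem from the Max Howell tweet, written in Python. (The `Node` class and `invert` function are my own illustrative names, not from any particular interview or source.)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A binary tree node with a value and optional children."""
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(root: Optional[Node]) -> Optional[Node]:
    """Mirror the tree by recursively swapping every node's children."""
    if root is None:
        return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root

# Example: a root with children 2 (left) and 3 (right)...
tree = Node(1, Node(2), Node(3))
invert(tree)
# ...now has 3 on the left and 2 on the right.
```

It’s a neat little recursion exercise — and also a fair illustration of the point: the trick is easy once you’ve seen it and obscure if you haven’t, and it says little about whether someone can ship production code.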
3: They’re counterproductive
According to Gitlab, with whiteboarding interviews “a recent computer science graduate will outperform a more senior candidate with a lot of valuable experience.” It strikes me as problematic, and arguably ageist, that interview performance would be negatively correlated with job experience.
A study showed that whiteboarding interviews were more a test of performance anxiety (or lack thereof) than of technical skill. Let me ask you: how many developers do you know who have some form of social anxiety? It’s almost a badge of honor in this industry.
Here are some things that would make for better coding interviews:
1: Take-home assessments
Take-home assessments partially relieve candidates of the performance anxiety mentioned above. Depending on the design, they’re also more realistic, not only by simulating a real engineering problem, but also by allowing candidates access to tools that most coders have on the job but not in the interview room (e.g., Google).
These are not without their downsides, admittedly. Take-home assessments demand a larger time commitment from candidates, which is challenging for those with children, for instance.
2: Prepare candidates in advance
Some might argue that whiteboarding interviews are a test of a candidate’s ability to quickly study and understand obscure topics. In that case, why not set candidates up for success by telling them the topic in question?
I once had a coding interview where I was explicitly told beforehand what specific technical topic would be covered. Not only was the question less of a “gotcha”, but it also felt relevant to day-to-day data science work. And they did it in a way where the question was still challenging and didn’t feel like they outright leaked the answer key.
Parting Thoughts
Whiteboarding interviews set the bar unfairly high. But when the company in question is a tech unicorn offering double (or more) the pay* of its competitors and receiving hundreds of applications per job listing, it can set the bar as high as it wants. (That’s capitalism for ya.)
* For instance, compare Facebook and Accenture. At the time of this writing, an E4 engineer at Facebook reports total compensation in the ballpark of $265K, while a similarly ranked engineer at Accenture reports only ~$95K.
There’s an entire industry dedicated to not only teaching how to handle these types of interview questions, but also selling interview materials to companies. To me, it reeks of the Shirky principle.
I won’t lie: there is a part of me that’s just biased and bitter because I’ve struggled with these interviews in the past. But there are a lot of people who would agree with me.