David Parmenter reports in VentureBeat:
The usual tech hiring process is failing companies. The whiteboard test — a go-to tech interview prop — is unapologetically biased and filled with arbitrary limitations. With too much emphasis on candidates’ pedigrees and performance under pressure, we’re ruling out great candidates. We all want to hire people who “get it,” but we’re often creating a mono-culture of people just like us. A healthy culture is rich, diverse, and growing all the time.
Phred needed a job, and we were hiring, woohoo! Because we knew him through a mutual friend, the interview process began casually, with a “get to know you” cup of coffee. After the initial meeting, he submitted his resume — and it was anything but ordinary. Rather than using Microsoft Word or PDF format, Phred’s resume was in YAML, a text format that techies use for configuring computer environments. This sent a strong nerd-cred vibe that piqued our interest.
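To give a flavor of what that looks like, here is a purely hypothetical sketch of a resume written as YAML (not Phred’s actual file, just an illustration of the format):

```yaml
# A hypothetical resume expressed as YAML -- every entry here is invented
# for illustration; nothing is taken from Phred's real resume.
name: Phred Example
contact:
  email: phred@example.com
  github: github.com/phred-example
experience:
  - title: Senior Software Engineer
    company: Example Corp
    years: 2012-2016
    highlights:
      - Built and operated a small data pipeline
      - Mentored junior engineers
skills: [python, linux, postgresql, yaml]
```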
Instead of giving Phred an arduous series of coding exercises at a whiteboard, as is all too common in my industry, we sent him home with a programming assignment. The task was to select and solve a few specific math problems in code. In Phred’s words:
“It was not simply a test of my ability to write programming solutions to math problems. Rather, the goal was to have me demonstrate my development process from end to end:
– How I work with problem requirements and decide what to do;
– How I use version control;
– How I architect and develop a solution;
– How I test and document my work.”
“Afterwards, I was invited by the team to join them for a formal code review. This was a great experience because it let me scope out the personalities and expertise of my future team (and vice versa), and enabled us all to get a feel for exactly what it would be like to work together.”
Phred found his interview experience to be both reasonable and rigorous, but also relaxed and humanizing. And today, I am happy to report that Phred is a valued member of the team.
As a technical manager, I’ve interviewed hundreds of people over the years. And, I’ve been on at least two or three dozen interviews myself. While my experience roughly follows industry norms, my opinion is that the usual tech hiring process is failing companies. The whiteboard test — a go-to tech interview prop — is unapologetically biased and filled with arbitrary limitations. With too much emphasis on candidates’ pedigrees and performance under pressure baked into the process, we’re likely ruling out great candidates while letting in less qualified ones!
Is tech hiring broken? More than a bit! But we can fix it. At the very least, it may be time for an attitude adjustment.
As an industry, we’ve seen several iterations of the Tech Interview:
Interview 1.0. If you’re over 30, you’ve almost definitely experienced this. The typical HR department screens tons of resumes. Hiring managers sift through those that get past the buzzword filter, pick a few that look interesting, and conduct phone interviews. Successful candidates troop in for a day of conversations with engineers from the team, leading to one offer, or sometimes zero offers. In interviews done poorly, candidates are asked the same questions over and over, with savvier job seekers using information gained in the first hour to game the system to their advantage in the second hour. If interviews are done well, you can build a good, maybe even great, team.
Interview 2.0. I like to think of Interview 2.0 as “Google-style” interviews. Well documented on the Internet, these are fairly common at most tech companies. The initial screening is pretty much the same as Interview 1.0, with a few key differences. When candidates arrive for the onsite interview, they’re presented with up to five intense whiteboard sessions — solving problems in code or pseudo-code, drawing up architecture pictures, etc. While Interview 2.0 is manifestly better than the 1.0 iteration, it’s still not perfect. For example, many solid candidates may think and problem-solve in a way that doesn’t work as well when they’re being judged in real time. On the other end of the spectrum, a boot camper may learn to “crack the interview” using something like Gayle Laakmann McDowell’s excellent Cracking the Coding Interview. (Then again, anyone who has managed to solve all 150 questions in that book is likely a much better programmer than when they finished boot camp!)
Interview 2.5. This is the interview style that I use most often. Screened candidates are given a take home test that’s either open book (look up algorithms on Google) or closed book (honor system, show your work). I prefer open book because it closely resembles what real-world work is like. Problems are relatively hard (think 8 queens problem, optimally pack cubes in a bigger volume, build a web application that does X, interactive card game, minimum spanning tree, etc.) but can usually be solved in a few hours. Candidates are given a specified time frame — 24, 48, or 72 hours to complete the test — and told to solve the problem as if they were working on a production system and then share the solution when it’s complete.
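To give a concrete sense of scale, here is a minimal sketch (my own illustration, not an actual test question we send out) of the kind of solution the 8 queens problem invites: a short backtracking search that a candidate could reasonably write, test, and document within the allotted time.

```python
# Minimal backtracking solver for the N-queens problem -- the kind of
# self-contained exercise a take-home test might ask for. Purely a sketch.

def solve_n_queens(n=8):
    """Return every placement of n queens on an n x n board, each solution
    given as a list of column indices, one per row, with no two queens
    attacking each other."""
    solutions = []

    def place(row, cols):
        # cols[r] is the column of the queen already placed on row r.
        if row == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            # A new queen is safe if it shares no column and no diagonal
            # with any queen placed on an earlier row.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(row + 1, cols)
                cols.pop()

    place(0, [])
    return solutions


if __name__ == "__main__":
    boards = solve_n_queens(8)
    print(f"{len(boards)} solutions found")   # 92 for the classic 8x8 board
    print("first solution:", boards[0])
```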
A few things I love about this: I can find out whether the code actually works, whether it’s well written, and whether it was written by someone I want to know more about. Some developers can write code that “sings.” We want those developers!
Typically, if you like what you see in the completed solution, the candidate is brought in for an interactive session to discuss it in depth and verify the work. You can even propose new requirements: for instance, that the input data no longer fits in memory, that the solution must run on a Raspberry Pi, or that it should use 1,000 computers in parallel.
The benefit of Interview 2.5 is clear: you get a good sense of what it’s like to work with the candidate. Are they having fun? Are you? Would you enjoy seeing them again, every day? While bias still exists, there’s more opportunity for different candidates to shine through as worthy members of your team. It’s also possible to bring candidates in to join a group design discussion around a fictional problem or an existing day-to-day problem. I’ve done this multiple times, and it really works!
Whenever bias and arbitrary limitations creep into the interview process, we’re likely reducing the talent pool. In whiteboard sessions, we often restrict the use of Google and expect a linear thought process, neither of which occurs naturally in the real world of work. Programmers have been rallying on social media against this grueling practice, dubbed “whiteboard algorithm hazing.” The current scenario favors candidates who perform well under pressure, which often leaves amazing talent behind.
Interview 3.0. I’m not totally convinced that a truly blind interview process is possible given the collaborative nature of technical work. However, the process can definitely be “blinder.” For example, we could:
- Anonymize resumes to protect gender identity.
- Review take-home tests blindly (without context).
- Create more diverse interview panels.
- Broaden recruiting pools to include liberal arts schools and non-university programs.
A really neat approach is offered by CodeFights, a San Francisco-based company that is flipping the conventional recruiting process on its head. The Silicon Valley startup matches job-seeking programmers with top employers anonymously. Coders battle bots and each other, earning points on a series of challenges. Those who perform well are given the opportunity to interview with top recruiters. Aside from providing a space to hone skills, CodeFights lets talented coders prove their competencies without race, gender, or background playing a role — a fresh approach for the tech industry, where diversity issues and unconscious bias in recruiting, interviewing, and hiring have been problematic. The cultural lens that CodeFights attempts to sidestep is commonplace for many tech interviewers.
We all want to hire people who “get it,” who “fit in.” But what we’re often doing is creating a mono-culture of people just like us. By contrast, a healthy culture is rich, diverse, and growing — changing to become more so all the time. Removing cultural fit as a criterion will likely get us better candidates. And while a “no jerks” filter is a must, being more open to differences of experience, personality, background, gender, and ethnicity just makes sense.