
These aren't just brain teasers; they actually correlate with on-the-job performance. When I worked at Google I looked up their internal reports on the subject, and they showed that the score people got on these algorithm interviews correlated pretty well with their performance ratings after they got hired. In other words, the group of people who got hired even though they had low scores performed significantly worse than the people who got hired with excellent scores.

Edit: This applies to senior engineers as well.



General mental ability is regularly affirmed as one of the most predictive factors for successful hires; it's just that it has a questionable legal history.


I've been doing this for a long time, and I've never seen any kind of correlation between how well someone solves silly algorithm puzzles in a 45-minute window and any kind of real-world performance. Also, there's no apparent a priori reason why this would be the case. It might be that the way performance rating is done at Google is constructed so as to correlate with silly brain-teaser performance.


Maybe for Google-type/scale problems, but for the engineers I've worked with and hired for "normal" software jobs (e.g. here's a boring business domain, make it CRUD, make an API, make a UI for it), there is almost an inverse correlation between being a CS/algorithm genius and being happy/successful in these everyday roles.

From observation, many super sharp CS people want to write systems from scratch, get bored, then move on. It's really hard to pull them back toward using off-the-shelf tech, not over-optimizing, etc. Many of the best folks I've worked with in these roles are not CS majors at all (EE, ECE, etc.), and this algo screening would filter them out.


I wouldn't be surprised if the people Google rejects for lacking technical skills are much better hires than the people Google rejects for lacking soft skills. Accumulate that across every company paying more than yours, and the people with great algorithm skills who are left will likely be social misfits in some way.


> It's really hard to pull them back to use off-the-shelf tech, don't over optimize, etc.

There is an argument to be made here about career and skill growth. I left my previous job, which was basically business-logic-to-CRUD-in-a-complex-domain, simply because I stopped growing there. The moment you stop growing in the software industry is the moment your career dies; at least that's my perception at this time, given my personal experiences.


This is my feeling as well; I think it's the difference between a computer scientist and an engineer. You just need to know who you are and what roles you prefer.


Problem: You need to test whether people can do A. No test exists for this directly.

Solution: Devise a test for whether people can do B. Studies correlate ability to do B with ability to do A.

Complication: Lose lots of candidates who can do A but choose not to get good at B.

Solution: Pay tons of recruiters to pound the pavement and turn over every rock and make sure every tech worker in the world applies.


Personally, I don't think correlation is good enough. I never doubted that being good at a single given abstract problem correlates with being a good candidate. So yes, it non-arbitrarily identifies good candidates, but it also arbitrarily eliminates a significant chunk of good candidates.


Yeah, it eliminates a lot of good candidates. If you can't match Google on benefits, then you shouldn't copy their hiring strategy and get their rejects; instead, try to get the diamonds their rough process misses by doing something different.


Eliminating good candidates is not in and of itself a bad thing if a) it decreases your risk of a bad hire, and b) you want to optimize for decreasing the risk of a bad hire at the expense of eliminating some potentially great candidates.


>but it also arbitrarily eliminates a significant chunk of good candidates.

I think they know that and they're okay with it because they have the volume and cash. I think they routinely tell you to try again.


How do they control for the bias that a person with a high or low score may simply be perceived to do better or worse, may be given more or fewer opportunities, etc., if their scores are known to the managers? In other words, it's a correlation, but is it explanatory?


Why would managers get employees' scores on random interview questions? Why would that affect project assignments months or years later? And why would we assume the folks at Google in charge of creating effective interview processes aren't capable of the most basic statistical analysis, such as making sure they had a large enough sample and enough performance reviews for each employee?


Because it is common for managers to be very engaged in the hiring process.


Not at Google; they don't even decide who your manager will be until after these steps.



