#51 - Hiring - getting to 'no'
Plus: Make boring plans; Starting in management isn't easy; Risk adjusted backlogs
Hi!
The good news is that my team, and the larger organization I’m a part of, is going to be growing substantially in the coming year. That’s also the bad news. We have to hire.
Hiring team members is a time-consuming, exhausting job - and probably rightly so, since it’s the most important thing we do. A lot of the planning, organizational, and process mistakes we make as managers can be mitigated if we’ve helped assemble a terrific team; on the other hand, there’s only so much that pulling on those same levers can do if we’ve made poor hiring choices. Your research computing team members are the people who do the work of supporting research with working code, data curation/analysis/systems, or computing systems. Putting in the time and effort that makes hiring time-consuming and tiring is absolutely appropriate.
Hiring, like anything else in management, is a practice: you improve at it by having a process you go through, learning from it, and refining it each time. That means being a lot more deliberate about hiring (or really any other aspect of management) than we usually are in academia-adjacent areas. It’s also, to be honest, more work. But hiring is the most important decision you’ll make as a manager. Decisions you make about new team members will last even after you leave. A good hiring choice will make everyone’s job easier, and a poor hiring choice will make everyone’s job worse.
The larger organization I’m part of is setting up a proper hiring pipeline/funnel, so the whole process has been on my mind this week. And every research computing team leader I’ve spoken to in the past couple of years - including one I just met this week - has described the same issues with hiring. So for the next several weeks I’ll write up how we approach hiring, and what we learn as we proceed with our hiring pipeline. The topics will look like:
Approach to hiring - saying no (this week)
Defining good hiring criteria - technical skills are important, but pay more attention to social competencies and match to how the team works
Interview questions, behavioural and technical
Setting up a pipeline
Hiring is recruiting - filling the pipeline from the far end.
The most important thing I’ve learned about hiring since leaving academia is something I first heard on Manager-Tools - here’s one of the relevant podcasts - but once you know to look for it, you’ll see it in all kinds of advice, including advice on how to reduce hiring bias.
The purpose of the screening and interviewing process is to find a reason to say no; to discover ways in which the candidate and the job would not be a good match.
Managers from research that I’ve met tend to reject this. We’ve been trained in an environment which emphasizes “meritocracy”, and for students and postdocs and junior faculty, everyone is looking for the “smartest” and “best” candidate with “the most potential”.
But none of that is right for us. Honestly, it’s not great for academic positions; it’s definitely wrong for anywhere else.
We’re hiring a new person because we have a problem; the team has things that need doing. It doesn’t matter who’s “smartest” or “best”, even if we knew how to reliably assess those things, which we absolutely do not. What matters is how well a job candidate can do the things we need them to do, and how well the tasks we have match what the candidate is looking for. The “smartest” candidate who has skills the team already has in abundance, but lacks the things we really need in this new position, can’t help us. The “best” candidate who is looking to do things that are radically different from what we need them to do is going to resent the job and leave it as soon as possible if we do hire them.
Coming from research, we still tend to reject this - either because we’re still looking for “the best potential” like we’ve been trained to do, or for the opposite reason: because we see that mindset as toxic and biased (which it is), and we don’t want to do anything that looks for negatives, which smacks of “winnowing out”, “raising the bar”, “keeping candidate quality high”, and so on. But that’s not what’s going on here. We need to think about it the way we were trained to think as researchers.
Once we’ve selected people for the next step, whatever that step is, the hypothesis is that they are suitable candidates for the job. That must be the hypothesis; otherwise we’d just be wasting their time and ours. And so, as scientists, the right way to proceed is to attempt to find evidence that contradicts the hypothesis - evidence that they wouldn’t succeed in the job. We are not searching for evidence that a person would be a good hire; there is nothing easier than fooling yourself into confirming a hypothesis! Confirmation bias and the halo effect are the enemy here. They allow all sorts of other biases to sneak in (and not necessarily even nefarious biases like the classic “looks like/thinks like me”; it could just be unconsciously over- or under-weighting certain parts of the job based on experiences with previous hires). We want to find evidence that this person would not succeed in the day-to-day technical work and teamwork we’d be asking of them.
In French class in school, I used to do dictées, where the teacher would read aloud and we’d transcribe into written French. We’d start at 100% and lose points for every mistake. That’s the approach we’re taking here. We have a list of must-have skills or approaches - technical but also team-related - and we look for evidence that the candidate does not have what we need them to have. Except here a candidate loses all the points when they’ve failed to demonstrate a must-have requirement for the job.
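To make the mechanics concrete, here’s a minimal sketch of that kind of scoring in Python - the criteria names, weights, and helpers are all invented for illustration, not taken from any real hiring tool. Missing a nice-to-have just costs points; failing to demonstrate any must-have ends the evaluation at zero.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One requirement from the job posting (names here are hypothetical)."""
    name: str
    must_have: bool = False   # failing a must-have ends the evaluation
    weight: int = 1           # nice-to-haves just cost points

@dataclass
class Evaluation:
    """Evidence gathered for one candidate: criterion name -> True if the
    candidate clearly demonstrated it somewhere in the process."""
    evidence: dict = field(default_factory=dict)

def score(criteria, evaluation):
    """Dictée-style scoring: start from full marks, subtract for every
    requirement the candidate failed to demonstrate, and return 0 outright
    if any must-have requirement wasn't demonstrated."""
    total = sum(c.weight for c in criteria)
    remaining = total
    for c in criteria:
        if not evaluation.evidence.get(c.name, False):
            if c.must_have:
                return 0          # missed a must-have: all the points are gone
            remaining -= c.weight
    return round(100 * remaining / total)

# Purely illustrative criteria and evidence
criteria = [
    Criterion("works well with researchers", must_have=True),
    Criterion("can write and maintain numerical Python", must_have=True),
    Criterion("has HPC scheduler experience"),
]

strong = Evaluation({"works well with researchers": True,
                     "can write and maintain numerical Python": True})
mismatch = Evaluation({"has HPC scheduler experience": True})

print(score(criteria, strong))    # 67 - missing only a nice-to-have
print(score(criteria, mismatch))  # 0  - missing a must-have
```

The arithmetic isn’t the point; the framing is - the evaluation only ever asks which requirements the candidate has failed to demonstrate.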
You can absolutely still be biased - your bias against a candidate can be so strong as to blind you to the fact that they have, in fact, demonstrated such-and-such a requirement. But it’s harder - not impossible, just harder - for us humans to deny evidence that’s clearly in front of us than it is to choose to see evidence that someone does have a skill, or at least has shown the potential for it. That’s why the scientific method is the way it is: you find evidence to disprove, not to support, the hypothesis.
Once you’ve gotten used to the idea that the purpose of the hiring process is for each side of the potential match to find reasons to say no, a lot of things become clearer - what the process might be, what job ads should say, and what approach you should take to criteria and questions. We’ll talk about criteria and questions next time.
For now, on to the roundup!
Managing Teams
Make Boring Plans - Camille Fournier
Riffing off of Choose Boring Technology, Fournier advocates for making boring, plodding plans - well thought out, well communicated, based on trying things, making sure they work, and implementing incrementally.
While it’s absolutely true that teams can be very motivated by audacious, ambitious goals, the plans for getting there should be nice and boring. This is especially true when the excitement has to come from somewhere else: “Novel Technology Deserves Boring Plans”.
4 Lessons Learned in 2020 as an Engineering Manager - Luca Canducci
I’ve been avoiding 2020 retrospective articles - honestly, who really wants to look back - but Canducci’s lessons and descriptions here are good:
Understand your role and its expectations
Momentum is often more important than skill
Address performance issues early and at full tilt
Writing is a Superpower
Markets, discrimination, and “lowering the bar” - Dan Luu
There’s a common, dumb argument that there can’t be sustained discrimination in a competitive hiring marketplace, because competition would have gotten rid of any such inefficiencies.
Needless to say, trying to negate actual reality with pure thought doesn’t work well, and Luu’s article puts that argument to rest. It’s an older piece, but it’s extremely relevant to the hiring-process discussion above: you aren’t trying to hire some “best” candidate out there - and even if you were, taking steps to ensure you have a diverse candidate pool and making a point of hiring candidates who face discrimination is the very opposite of “lowering the bar”, and doesn’t lead to worse outcomes.
Managing Your Own Career
Questionable Advice: “How do I feel Worthwhile as a Manager when My People are Doing all the Implementing?” - Charity Majors
The Non-psychopath’s Guide to Managing an Open-source Project - Kode Vicious, ACM Queue
Majors’ article is a good reminder for new managers that it’s really hard to recalibrate your sense of job satisfaction and accomplishment when you’ve moved into management. All you can do is focus on the big, long-timeline stuff while still taking joy in the little moments, and make sure that you’re a source of calm, not chaos, in a crisis.
Vicious takes on the same topic but from the point of view of a new open source maintainer, which is managing a software development team in hard mode. Most of the same rules apply.
Product Management
Strengths, weaknesses, opportunities, and threats facing the GNU Autotools - Zachary Weinberg
If you’re familiar with autotools - and maybe you’re not - this is a really interesting and very transparent product-focused assessment: a simple but thorough SWOT analysis of the current GNU Autotools stack, which hasn’t been updated in some time (which itself makes updating harder, since the entire process is “rusty”), and which has enormous legacy baggage, but still has opportunities.