#53 - Making interview questions from job requirements
Plus: Giving tough feedback; Coaching tools; Your team's writing and what it says about the team
Hi!
So you’ve got a list of hiring requirements written up - the next step is to think about how to evaluate candidates against the requirements.
Once you have a pretty clear list of job requirements and a sketched-out job description that the team members and other stakeholders have agreed on, the next step, before even a job ad, is to figure out how you will evaluate candidates against the job description.
I’m writing as if it’s a linear process - define requirements, then figure out evaluations - even though it’s much more likely to be iterative. A requirement has to be defined clearly enough that it can be evaluated relatively straightforwardly and unambiguously, by you and by the candidate - that’s one of the main purposes of the requirement! - and it often won’t be that clear right away.
A clearly defined requirement will immediately suggest some methods of evaluation. “Tell me about a time when you debugged a service outage on a Linux server under time pressure” is a good and interesting interview question. “Do you have intermediate Linux sysadmin skills” isn’t. So as you start figuring out evaluation approaches, some of the job requirements will be sent back to the team for clarification.
As with the description itself, putting together a first version yourself and then collecting input from the team members and stakeholders the new person would be working with will help you put together a better process, and make sure those who are asking the candidate questions will be thinking along the same lines.
The fact that we’re evaluating candidates against our carefully selected list of requirements and activities - looking for reasons to say no - means a couple of things:
1. During the various stages of the evaluation, you won’t be rating the candidate against other candidates; instead, you will be rating them against the bar set for the job, and at the end of each stage the team should have a clear pass/fail decision on whether the candidate makes it to the next stage.
2. Consequently, there is no point in asking any interview question that doesn’t have possible answers that would disqualify the candidate as not meeting the requirements. Asking puzzle questions for “bonus points”, or looking for other reasons to conclude that the candidate might be a good fit after all, opens the door to bias and to hiring candidates who don’t meet the requirements you’ve set.
For 1 - rating against the bar - after each stage of evaluation, those involved should be able to quickly make a firm decision about whether the candidate proceeds to the next step. Whether it’s a technical screening, a short screening interview, or a longer interview, team members should be keeping track against the list of criteria you’ve agreed upon. For us, for instance, for a current position, that might look like:
“Cultural fit” and soft skills for working on our team:
Have they demonstrated they can work with minimal supervision given goal and constraints, seeking input when needed
Have they demonstrated being able to work closely in a team for specific projects
Have they shown they’re willing to initiate communication and collaboration with team members as needed
Can they clearly describe a process and respectfully advance a technical argument in writing
Have they demonstrated an ability to ship things that others can build on
Have they shown they can learn new things independently (with help available) as needed
Have they shown they can incorporate constructive feedback and corrections into the next iteration of their work
Have they shown they respectfully offer constructive feedback and corrections to others
Do they value working as part of a diverse team of people with multidisciplinary expertise and varied backgrounds
Technical skills:
Have they shown success in the past working in an adjacent area of [thing we’re hiring for - could be databases, RESTful services, etc.]
Given an overview and architecture diagram of a proposed implementation of a backend service, can they clearly and thoughtfully express the tradeoffs made and pros and cons in the area(s) they are familiar with
[Intermediate level] - Have they successfully shipped code working in more than one programming language in the past
Have they demonstrated success in debugging other people’s code
Can they write Bash (or other Linux shell) scripts to automate collections of tasks
Are they currently fluent enough with at least one of Python, JS, or Go to be able to submit PRs for modest functionality improvements to an API service
Are they fluent enough with at least one of Python, JS, or Go to offer thoughtful code review of such a PR
If there is evidence of a significant deficit in any of those skills, that’s evidence that they are not going to succeed working on our project and it won’t be a great match for them or us. If they could succeed with that deficit, it means we’ve defined our requirements - or our definition of success - incorrectly. If they have those capabilities and don’t succeed anyway, it means we’ve missed something.
For 2 - no non-disqualifying questions - this is why even Google doesn’t ask puzzle questions any more (why are manhole covers round, how many ping-pong balls fit in an airplane, and the like). Questions that hinge on having some key insight during the stressful, artificial, and time-limited period of an interview just aren’t helpful. A candidate not getting the answer doesn’t tell you they can’t do the job, so there’s no point in spending precious interview time asking it.
This also really limits the scope of technical questions you can usefully ask and expect answers to in real time. If they need some particular fact in their head, or to see some key insight, or it’s just too much to pull off in a stressful hour under artificial conditions, then failing to answer doesn’t disqualify them from the job, so the question doesn’t help you make your decision. We know that live coding interviews test for confidence more than coding ability.
There are technical questions which work well live, or as “take home” assignments, and we’ve tried to define the technical requirements above so that some of them lend themselves to simplified versions of the tasks the new hire would be doing every day. For example: “here’s some backend code that implements a simple API; walk me through it - you can ask as many questions as you like”. Or “here’s a simple repo with an issue; submit a PR to fix the issue”. Or “here’s a simple repo with a PR; provide a review of the PR”. Other teams have had good luck with pair-programming a PR for an issue.
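To make the first of those exercises concrete, here’s a minimal sketch of the sort of code you might hand a candidate for the “walk me through it” conversation. It’s a hypothetical toy API - the framework (Flask), the endpoints, and the in-memory store are all invented for illustration, not an exercise we actually use:

```python
# Hypothetical walk-through exercise: a deliberately tiny "items" API.
# The framework, endpoints, and in-memory store are invented for
# illustration - adapt to whatever your team actually works on.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Module-level "database" - a natural opening for questions about
# persistence, concurrency, and what happens on restart.
ITEMS = {1: {"id": 1, "name": "alpha"}, 2: {"id": 2, "name": "beta"}}
NEXT_ID = 3


@app.route("/items", methods=["GET"])
def list_items():
    # Returns everything at once; the lack of pagination is a
    # tradeoff worth discussing.
    return jsonify(list(ITEMS.values()))


@app.route("/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    if item_id not in ITEMS:
        abort(404)
    return jsonify(ITEMS[item_id])


@app.route("/items", methods=["POST"])
def create_item():
    # No input validation - a thoughtful candidate should flag this.
    global NEXT_ID
    payload = request.get_json(force=True)
    item = {"id": NEXT_ID, "name": payload.get("name", "")}
    ITEMS[NEXT_ID] = item
    NEXT_ID += 1
    return jsonify(item), 201


if __name__ == "__main__":
    app.run(debug=True)
```

The simplifications (module-level state, no validation, no pagination) are deliberate: each gives the candidate something concrete to notice and discuss, mapping back to the “express the tradeoffs” requirement above, and the questions they ask tell you as much as their answers.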
You want it to be hard enough to be in some way representative of the work, but simple enough that not completing the task adequately in some short amount of time genuinely is evidence against the hypothesis that they have the skill. Setting that level just right takes some calibration - that is, some experience - but the fact that you’ll improve questions over time is a good thing, not a bad thing, and it doesn’t absolve you from trying to ask the question even when you’re first starting to interview for the position.
This kind of simulation-based testing of requirements can work for very technical skills, in small doses. Lots of fields do it - accountancy, for instance - and it can be useful! But we over-rely on it in computing. It reduces “making an assessment of a candidate” to the more-comfortable-for-tech-people task of “making a technical assessment of a code sample or architecture diagram invented on the fly”. Whatever tasks you use, they’re going to be poor simulations of the day-to-day research computing work, so they’re not as dispositive as you might think. Worse, in the end, there are only so many technical skills you’re going to be able to test in an interview, and only so many take-home questions a candidate will be willing to answer.
Luckily, you aren’t limited to what they can do in one or a few hours - you have all the accomplishments of their entire career to dig into! So the meat and potatoes of any interview, even a technical one, is going to be a behavioural interview rather than a simulation-based competency assessment: “tell me about a time…”.
You want to ask questions that probe the skills and behaviours you need. Hopefully the requirements you’ve defined are clear enough to lend themselves to this. Honestly, going through all your requirements and asking “tell me about a time when [you demonstrated requirement X]” is a pretty decent starting point for planning an interview, if you’ve got your Xs in order. Also note that behavioural interview questions can be extremely technical, if “requirement X” is a technical one! When you’re asking behavioural questions, the key is to dig into the answer a lot. Every time they make a decision in the story, politely interrupt and ask why; really dig down.
This is absolutely vital. Even if the story they’re telling is a huge success, if they can’t convincingly describe why they made one choice over another, it’s unlikely they’ll be able to repeat the success. And of course if the story is a huge failure but the decisions were reasonable given what they knew, and they’ve demonstrated they can learn from it, the story needn’t be disqualifying.
We’re getting pretty close to the end of the hiring series - we’ll talk next week about the actual interviewing process and setting up a pipeline.
For now, on to the roundup!
Managing Individuals
Getting Over Your Fear of Giving Tough Feedback - Said Ketchman, The Introverted Engineer
Research: Men Get More Actionable Feedback Than Women - Elena Doldor, Madeleine Wyatt, and Jo Silvester
We’re people who went into both research and computing, and so as a population we are disproportionately task-focussed and introverted. That can make giving negative feedback - especially about work practices, maybe less about work outcomes - deeply uncomfortable. And humans avoid doing things that make us uncomfortable!
But your team members deserve to know when something they’ve been doing doesn’t meet your expectations. You wouldn’t want your boss to keep feedback from you if there was something they thought you could do better, and it’s unfair to do the same to your team members. You’d want them to communicate it with kindness, but with clarity. Ketchman talks about three steps in getting over the discomfort of giving tough feedback:
Be clear on the purpose
Build trust first
Keep practicing and improving
Having a ready-made formula for giving feedback can be a huge help here. Whether it’s the Manager-Tools feedback model or the very similar Situation-Behaviour-Impact model, it gives you a structure you can practice with while keeping the purpose - improved future outcomes - in mind.
Building trust of course comes from regular one-on-ones, and from holding up your end of responsibilities when you’re asked to do things for your team member.
There’s an equity reason to get over discomfort about giving corrective feedback, too. Doldor, Wyatt, and Silvester looked at feedback given to 146 mid-career leaders, provided anonymously by more than 1,000 of their peers and managers, and found that corrective feedback was given less often to women, and even positive feedback to women was given in a fuzzier way that is harder to translate into concrete next steps. This was true whether the boss was a man or a woman. The more comfortable we can get giving concrete feedback, positive and negative, to all of our reports, the better they’ll be able to grow and develop. It’s not fair to let team members repeatedly fail to meet our expectations and stay silent about it just because we’re uncomfortable bringing it up.
Coaching and Feedback Tools for Leaders - Ed Batista
Batista has a list of resources he’s written or found helpful on the purpose and delivery of feedback and coaching. Manager-Tools Basics is very good on this, but getting another perspective is helpful.
Managing Teams
How your organization’s writing reveals its problems — and potential fixes - Josh Bernoff
As written asynchronous communication becomes more and more important to our teams, the first goal was simply to move conversations to an async, written format. Now that we’re settling into this mode, it’s good to take a look at the documents your team is generating and see if they can be improved.
I don’t think the issues Bernoff identifies are necessarily symptoms of the problems he suggests, but they are areas that could be looked into - and those problems with the documents could be usefully addressed (with feedback!) at any rate. A few that struck me:
Too long/short - is this just box-checking rather than being helpful?
Too much passive voice - not enough ownership being taken in the team?
Too relentlessly positive - are there issues around candour in the team?
Project Management
Snowflake retros - Mike Crittenden
Retrospectives are useful for all kinds of work, but they’re often only routinely done in software development teams. That’s a mistake!
Retrospectives (or hot-washes or after action reviews or any of a number of other names) are a fantastic way of taking stock and learning from what’s happened recently. Often that’s at the end of a project, but it needn’t be.
If they are done routinely, they can get a little boring or stale. Crittenden suggests avoiding that by making each one unique - rotating both the person who runs the retro and the format of the retro (there’s a link to 10 different retro formats) so that things get changed up regularly.
As a postscript - I cannot recommend highly enough the approach of having team members rotate through running regular team meetings. It builds an important skill in your team members, makes people more willing to take charge of things, gives you additional perspectives, and more.
Random
For better or worse, with the addition of user-defined functions, Excel is now inarguably a full-fledged programming language.
Papers With Code, the project that’s highlighting ML preprints that include code and bundling them together, is now indexing data sets.
I’ve talked about incident reports here for software and systems - here’s a burgeoning list of AI Incident Reports for incidents in deployments of AI/ML systems.
Data visualization of 67 years of Lego sets.
Data visualization of 30 months of electricity usage in Manchester, 1951-1954, using cut-out cardstock.