#54 - 19 Feb 2021
Interview questions; measuring productivity (blockers); give tough feedback; name what's happening; building alignment around a new strategy
Hi, everyone:
I hope you’re doing well! I have two questions for you.
This week kind of got away from me - it’s been an exciting couple of weeks, with two new team members joining and our project getting ready to take on new responsibilities - but it means that the final instalment on research computing hiring will be delayed until next week.
So I’ll ask two questions that I was going to ask at the end of the series.
First - I was starting to feel that the article plus the roundup was kind of unwieldy. (That’s why I started adding the ‘skip to the roundup’ link). Between the article and the roundup, we were starting to get up to 5000 words - and that’s not even counting the job postings. Even the roundup alone seems like it might be too many words sometimes.
So, what do you think? Should I split them into two days’ posts - say, new content on Monday, link roundup on Friday - with the option to subscribe to either or both? Or just keep on as they are? Any other suggestions?
Second question - What should be next for the article content? I have my own topic list:
Performance management - feedback and goal setting - for research computing teams
Why research computing can learn some grant-writing tips from nonprofits
Strategy and product management for research computing
But I’d very much like your suggestions for topics - or for other features (enough time may have passed and new members joined to start up AMA - Ask Managers Anything - again).
So let me know what you’d like to see - as always, suggestions, feedback, and comments are greatly welcomed. Just hit reply or email me at jonathan@managerphd.com.
But for now, the roundup!
Managing Teams
Series: Unpacking Interview Questions - Jacob Kaplan-Moss
This series of questions ties in very nicely with last week’s discussion of evaluating against requirements. Kaplan-Moss covers five interview questions - maybe they’d be good for your roles, maybe not - with very carefully thought-out rubrics for each:
The value or skill that the question is designed to measure
The question itself
Follow-ups to ask
Behaviors to look for
Positive signs / red flags in the answer
Getting to this level for your own interviews will take a lot of work, and may never happen for some requirements if, as for most of us, your requirements change more rapidly than you hire. But this standard is an excellent one to aspire to, whether your approach to evaluating against a particular requirement is an interview question or a skill-testing simulation of work product like a technical test. What constitutes a good response, and what a poor one? If you can’t clearly express an answer to that, is it really a good evaluation method? And how should you follow up on a response or submission?
Engineering Productivity Can Be Measured - Just Not How You’d Expect - Antoine Boulanger, OKAY
A while ago we had a flurry of “measuring developer productivity” articles, mainly pointing out that the idea was a bit misguided.
There’s a management book, “How to Measure Anything”, which I think of as “Error Bars for Business Types”. Fine book as far as that goes; not really for us as an audience. But one point that book made stuck with me - as managers, the purpose of a measurement we make is to inform a decision, and thus action. We don’t measure “just to keep an eye” on something.
I think when managers daydream and imagine measuring team member productivity, the tendency is to picture some dashboard we can look at every so often while we’re doing our own stuff, and only switch into manager mode and intervene when the dashboard flips from green to yellow. “Hey, Jason, it seems like there’s some kind of problem. Tell me about it”. It’s a lovely thought and just completely divorced from the reality of managing a team of humans, where we have to constantly talk and work with people - team members, stakeholders - to understand what’s going on, find out what’s needed, remove blockers, and to help everyone stay pointed in the same direction.
For individuals, there’s not much more you can do than routinely set goals with team members, and then together assess whether or not those goals were reached, why, and what can be done differently in the future. It’s labour-intensive and requires subjective judgement, and that’s about all there is to it.
As Boulanger points out, however, there absolutely are useful productivity measures we can keep track of and act on - at the team level. The idea is to track things that are slowing the team down - is tooling inadequate? Are processes leaving tasks idle for too long or creating too much work in progress? Are people’s days sliced up into too many chunks with meetings? - and do something about it as a manager.
Boulanger suggests that we:
Look at our team’s calendars - quantify how sliced up their time is, look at what meetings can be streamlined/made asynchronous
Check our team’s ticket tracking logs - are their tickets sitting idle for long stretches of time? Why?
Survey our team - are they happy with tooling? Are they happy with on-call duties, if any? What changes could be made to make them happier and more productive?
And turn these questions into a modest number of metrics to focus on and improve. Of course, this flips the script on what some managers in tech imagine productivity metrics are for. Rather than turning them on the team members (“You’re only scoring an 18.3 on the productivity meter! Productivize more!”), it means there’s work for the manager to do. Such is the reality of our job.
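As a concrete sketch of the ticket-log idea: the snippet below turns a ticket status log into a simple “time sitting idle” metric. The ticket IDs, status names, and the 7-day threshold are my own illustrative assumptions, not anything from Boulanger’s article or a particular ticketing system.

```python
from datetime import datetime

# Hypothetical status-change log: (ticket_id, new_status, time it entered that status).
# In practice you'd export this from your ticket tracker.
events = [
    ("RC-101", "open",        datetime(2021, 2, 1)),
    ("RC-101", "in progress", datetime(2021, 2, 12)),
    ("RC-101", "done",        datetime(2021, 2, 13)),
    ("RC-102", "open",        datetime(2021, 2, 2)),
    ("RC-102", "done",        datetime(2021, 2, 4)),
]

def idle_days(events, idle_status="open"):
    """Days each ticket spent sitting in `idle_status` before anyone picked it up."""
    entered = {}   # ticket -> time it entered the idle status
    waited = {}    # ticket -> whole days spent idle
    # Process each ticket's events in time order.
    for ticket, status, when in sorted(events, key=lambda e: (e[0], e[2])):
        if status == idle_status:
            entered[ticket] = when
        elif ticket in entered:
            waited[ticket] = (when - entered.pop(ticket)).days
    return waited

for ticket, days in idle_days(events).items():
    flag = "  <- worth a look" if days > 7 else ""   # arbitrary 7-day threshold
    print(f"{ticket}: {days} days waiting{flag}")
```

The point isn’t the arithmetic, of course - it’s that once the number exists, the manager’s job is to ask *why* RC-101 sat for eleven days and fix the process, not to grade whoever eventually picked it up.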
Managing Individuals
Tough Love for Managers who Need to Give Feedback - Lara Hogan
Stop Softening Tough Feedback - Dane Jensen and Peggy Baumgartner, HBR
Both Hogan’s article and Jensen & Baumgartner’s tell us not to soften our critical feedback. If we’re not frankly telling our team member how they’re not meeting expectations, then how can we possibly expect anything to change? And if we’re not trying to change future outcomes - by learning that the expectation wasn’t reasonable, or by having the team member meet the expectation in the future - why are we even having the conversation?
Hogan adds another four points:
Your feelings have no place in feedback for your reports. They are your responsibility, not theirs.
Similarly, you don’t get to assume why someone’s behaving the way they’re behaving. Your beliefs about their internal state are also your responsibility, not theirs.
If you can’t own the feedback, don’t give it. We talked about the dangers of “triangulated feedback” before; take it from me, just say no.
Be honest with yourself about whether or not you think this person can meet these expectations.
Jensen & Baumgartner focus more specifically on “sandwich feedback” - softening critical feedback by surrounding it with positive feedback. There’s actually data on how poorly that works - it’s transparent and frankly a little condescending. You should absolutely be giving lots of positive feedback as well as critical feedback, so that your team members know that you see the times they meet or exceed expectations as well as when they don’t - but you give the positive feedback as it arises, not as a diluent for corrective feedback.
They also recommend that, after pointing out the behaviour that prompted the feedback and the impact the unmet expectation is having, you tell the team member what you’d like them to do next. I’d be cautious about that - it’s not what they mean, but it can come across as “you should do it this way in the future”. You don’t want to be prescriptive about how your team members do things; you want to be clear about the expected outcomes and let them choose how to get there.
Managing Your Own Career
The skill of naming what’s happening in the room - Lara Hogan, LeadDev
There are a number of meeting skills which certainly apply to managing a team, but are powerful skills to have in other contexts, too. In what may be a first for the newsletter, Hogan appears twice in one issue with articles from two different venues. In this LeadDev article, she describes one powerful and under-used skill - actually describing what’s happening in the meeting. An example from the article:
‘Hey, let’s just take a second to check in: it feels like maybe we are going in circles on this.’
As people partly in (often academic) research and partly in tech, we as a population are disproportionately … less than enthusiastic about talking about feelings or individual reactions to stuff. Life of the mind, and all. That’s what makes this a bit of a superpower; “I get the impression the software developers are less enthusiastic about this idea, is that right?” or “The research team has repeated that point a couple of times - I get the sense they feel like it’s not being fully understood, is that a fair interpretation?” can completely un-stick a meeting which has fallen into a frustrating rut.
Even if you’re pretty junior in the context of some particular meeting and not feeling comfortable calling these things out, just learning to make these observations and put them into words in your own mind is an incredibly powerful skill to develop.
Mailbag: Building alignment around a new strategy - Will Larson
This was written in the context of setting a technical strategy within a technical organization, but it works just as well in a context where you’re working with stakeholders and researchers about a product or project strategy.
The approach Larson describes will be familiar from the discussion about “change management” and stopping doing things in #58 - there’s no way around it, it’s labour- and time-consuming. You have to first make sure everyone agrees on the problem, then build towards a solution, and shop that around, gathering input.
Most of us aren’t working with communities of a size where a working group makes sense - it’s our job to put together an initial solution proposal (or a couple of options), then pre-wire it, iteratively gathering feedback, ahead of a presentation where the solution is proposed. But the basic approach is the same. If it sounds like a lot of work, it is! Getting wide agreement about something new - or, even worse, a change to something old - is genuinely difficult; there’s no way around it.
Random
ENIAC was announced 75 years ago this week.
For me, a big driver for SaaS to-do lists and writing tools was having access to my stuff both at work and at home. If we’re working from home indefinitely, is that even a thing any more? Imdone is a Trello-alike and Ghostwriter is an editor for blog posts and other writing, both based on local directories of Markdown files.
A really interesting-looking free ~300-page book - an introduction to ML specifically focussed on prediction and decision support.