From the Archives: #60 - Quarterly goal setting & review sessions
Plus: The resilience of mixed-seniority teams; Gaining mentors; Product manager skills; Postmortems from scratch
Hi all - I hope you’re enjoying the summer!
For the next few weeks, I’ll be sending issues from the archives, interspersed with a few short new posts; as summer winds down we’ll go back to normal posting. Let me know what you think, and enjoy!
Hi there - if this is a long weekend for you, I hope you’re enjoying it.
Last time we spoke a little bit about expectations, and routine feedback to team members and peers when those expectations are met or not met. This time, let’s consider longer-timeline expectations: goal setting and review.
Feedback is a mechanism to align expectations with your team members in the moment, and to encourage meeting those expectations in the future. Sometimes those expectations were explicit; other times they were implicit, and feedback is a helpful way of making them explicit. It’s a simple, extremely useful, and scandalously underused tool, particularly in research environments. What’s more, your team members want feedback. Do you want more feedback from your manager about how you’re doing? Why would your team members feel any differently than you do?
On top of the small course-corrections of routine feedback, it’s important to have regular conversations looking back at previous goals and setting future ones. Here, the expectations are very explicit - you are setting goals, and looking back to see whether they were met.
Our organizations probably have an annual review process set up for this. They’re often pretty poor. What’s more, a year is just an absurdly long time in many areas. For most of us, our work is changing rapidly; the idea that today we should have a pretty good idea of our work from now until April 2022 is just goofy.
Quarterly is a pretty good cadence for reviewing work, learning, and career development goals with team members. Twelve-ish weeks is long enough to accomplish meaningful work, while being short enough that priorities probably haven’t shifted dramatically in the intervening time. These goals are things you absolutely can and should be talking about in one-on-ones, but setting some time aside every quarter just for goal-review and goal-setting conversations clarifies expectations about the goals, gets them written down, and gives team members clarity about what their priorities are. The resulting document is also something that can inform one-on-one discussions.
A template for the document I use for such reviews is available as a Google Doc here; I show an excerpt below. By keeping it as a Google Doc (or Office 365/SharePoint document, or whatever tool your team uses routinely), it can be maintained as a running document (most recent review on top), collaborated on, and frequently reviewed. What will be most useful for you and your team may well be different. I use these reviews as an occasion for a deeper check-in on how things are going in areas that sometimes get overlooked in the more day-to-day focus of one-on-ones.
The mechanics of these reviews: we schedule a meeting outside of our usual one-on-ones. An hour is generally enough for a team member who’s done this before; it might take more than that for someone doing it for the first time. I update the document by adding the review for the new quarter, copying in the goals set at the last review; then each of us adds starting notes. The document covers:
Questions for discussion - finding out what they were proud of, struggled with, and learned in the last quarter, and anything they’re excited or anxious about in the next;
Reviewing past goals - discussing whether they met expectations - and setting next-quarter goals on:
work outcomes
career development
skills development
Discussing what they need to work on in light of how they did on those goals, or because of other things that have happened in the last quarter - nothing in this discussion should be new or a surprise; it should all have been raised before in routine feedback;
Setting, with their input, goals for the next quarter.
In the meeting we discuss the starting notes from myself and the team member, then agree on summaries and commit to future goals. Having their input on these sections is extremely valuable; it increases their commitment to the goals.
The first time a team member goes through this, it can be a little scary - people have had, or heard of, pretty terrible performance review experiences in the past, and they often don’t realize it’s an opportunity for a conversation about what’s going well, what isn’t, what priorities to work on, and their own learning and career development goals. To make it a little less difficult the first time, when onboarding a new hire we immediately set 30-day goals in the worksheet, then after a month set goals for the remaining 60 days of the quarter. At the end of their first quarter we go through the sheet together for the first time, but by then it’s at least partially familiar to them, so it doesn’t seem so daunting.
Do you have a similar process? Have you seen anything similar or that you find works very well? Let me know - hit reply, or email jonathan@managerphd.com.
For now, on to the roundup!
Managing Teams
The resilience of mixed seniority engineering teams - Tito Sarrionandia
An ongoing if unintended theme of the newsletter is that when managing teams, many useful things - like everything involved in moving the team to distributed work-from-home, giving feedback, quarterly goal-setting - come down to making things more explicit. That requires a lot of up-front work, more documentation, changes to processes, and a little more discomfort for the manager initially - but it then makes a lot of other things better and easier for everyone.
In this short article, Sarrionandia describes the advantages of having teams with a range of seniority in exactly this light. Having junior staff on the team means that more resources have to be dedicated initially to explaining how things work, documenting processes and tools, and so on. But those steps of making things more explicit make things work better for everyone. They make it easier to bring new people onto the project, junior or senior. The now-explicit steps can be put into playbooks or automation scripts (or conference talks, or papers). It’s more work initially, but that work pays off.
Measures of engineering impact - Will Larson
I’ve mentioned before that, as managers, we measure something to inform a decision or action. We’ve talked about measuring the productivity of technical teams - you have to look at the team level, not the individual, and pick metrics that indicate something getting in the team’s way, something that you can change. The measures inform an action. That’s useful; you can arrange for fewer things to be in your team’s way.
But measuring the impact of our technical teams is really what we want to accomplish. You want your organization to have as much impact as possible. We owe our team members work with meaningful consequences, and we owe the research endeavour as much help in the advance of knowledge as we can offer.
Larson and some of his colleagues discussed this and found that a number of big tech companies use almost comically simple internal measures of impact - measures that are straightforward, centre on the things they care about, and are hard to game:
Square - new billable features
Gusto - number of competitive advantages created/improved
AWS - number of comms-approved press releases
As people working in scholarly research, one of our outsized skills is finding evidence for or against the claim “X affects Y” by choosing one or more measurable proxy quantities to observe. Choosing simple metrics can be a very effective way of demonstrating impact externally and informing decision making internally.
In research computing, some of our measures take some doing but are inarguably signs of impact - amount of use, papers published, citations, contributions. Which ones make the most sense will depend on what your teams work on; but any of them, or any related metric, is 100x more meaningful than input measures like utilization, lines of code, or data entries.
Managing Your Own Career
One mentor isn’t enough. Here’s how I built a network of mentors - Erika Moore
We’ve talked about assembling a group of mentors before, such as in #60. People by and large are more than happy to give advice and suggestions to others coming up in their field. Here Moore, writing in Science’s careers section, gives very specific and useful steps about how to build a network of people that one can ask for advice:
Cast a wide net
Get to the point - send short emails with very specific asks (which requires clarity on your part about what you want from them)
Come prepared for those who do say yes
Consider the context - things that worked for them might not work for you, and people may have a lot going on right now and be unable to help
Product Management
Product Manager Assessment - Sean Sullivan, ProductFix
In research, we typically learn on the job a rough-and-ready form of project management, which can work passably well for research projects where everyone is relatively well aligned and needs the project to go well.
In managing teams, though, we often also need to focus on managing products.
In this article, Sullivan walks through a product management assessment with three high-level components - product expertise (including product/market fit), product management skills (shown below), and people skills and other competencies.
This would have to be adapted quite a bit depending on the context. But in terms of illustrating the breadth of skills needed, and the areas to be aware of when managing multiple products, I think it’s valuable.
The Zero-prep Postmortem: How to run your first incident postmortem with no preparation - Jonathan Hall
It’s never too late to start running postmortems when something goes wrong, and it doesn’t have to be an advanced practice or super complicated. Hall provides a script for your first couple. His focus is on complex systems and software, but the approach can be applied to almost any kind of work.
I’d suggest that once you have the basic approach down, you move away from “root causes” and “mitigations” and towards “lessons learned”. Those lessons learned can be about the postmortem process itself, too; you can quickly start tailoring it to your team and your work.