#176 - Scientific Systems are Uncertain; Human Systems are Ambiguous
Plus: Feedback is for behaviour change, not self-expression; trust lets you see reality; change is risky, and makes things worse before improving them; writing usefully
I hope you enjoyed the summer (or winter, depending) and that things are going well!
I’ve been officially out of academia for a decade now; but to me, the start of September still feels like the real start of the New Year - things starting back up again after people have been away and have now returned, recharged and full of new ideas.
We’re back here at Manager, Ph.D. world headquarters, raring to go and ready for the push to finish the year strong!
I was talking to a new manager with a STEM PhD, and watched with some surprise as they subtly but literally physically recoiled from a situation that was unclear.
Talking with them later, I tried to get to the heart of that reaction — yes, there was a problem to be solved, but they were accomplished problem-solvers. Yes, there was a gap in knowledge about the situation, but isn’t, like, all of scientific research about identifying and resolving gaps in knowledge?
Talking the situation through with them reminded me of something pretty important, something I’ve glossed over quite a bit in some of my writing to you in the past. Throughout grad school in STEM fields, we get quite used to dealing with uncertainty. But we rarely have to confront ambiguity.
In our undergrad and early grad school career, there’s always an answer in the back of the book.
Once we start performing research, we’re asking questions that haven’t been answered. There’s no “back of the book” - yet. But, we hope, we’re helping write those new editions of books, ones that will have the answers, by discovering new knowledge.
Our undergrad and early graduate education is about learning a canon of knowledge in our field. Once that is done, we know the fundamentals, and we’ve learned how to learn - we can read papers and books and listen to talks to come up to speed in other parts of the field.
From there the work is learning to deal with uncertainty. We don’t know if our experiment will work; we have to figure out which simulation is possible and will teach us what we need to know; we don’t know how much we can trust the data set we’ve been given.
But we quickly figure out how to deal with that, right? You can logic and plan and measure your way through uncertainty, and we’re very good at logic-ing and planning and measuring.
There’s some set of notional error bars, and we know how to beat them down - it’s about planning and (to some extent) risk management. We read and we think through scenarios, we scribble down some decision trees - if this doesn’t work then we’ll try this; if we see the effect at all with this detector or assay or sensor then we can improve accuracy and precision by moving to this approach.
But as your responsibilities grow, less and less of your job becomes quantitative and based on natural-science systems, and more and more of your job becomes working in human systems. Once you become a manager, the bulk of your work is on guiding and nurturing a human system, and interacting with humans in other human systems.
Human systems aren’t merely uncertain, they’re often also ambiguous.
Uncertainty means the outcome isn’t known, but the parameters of the problem are pretty well-defined - and that’s our forte.
But ambiguity arises when the problem itself isn’t defined, or when there are conflicting, possibly contradictory interpretations of or aspects to the problem. There is no right answer; there’s not even a proper, well-posed question.
Systems of people generate ambiguous situations all the time, though, through:
Unclear communication, and
Lack of clarity in who’s responsible for decision making or tasks, and
Competing goals and incentives, and
Leaders intentionally leaving things flexible (basically delegating ambiguity-handling), and
Shifting priorities, and
Possible future changes in the environment, and…
But you can’t measure your way through ambiguity.
You can’t logic your way through ambiguity.
So with some of the tools we’re strongest with no longer working, it’s easy to kind of freeze and not know what to do.
It’s worth pointing out here how much more capably trained our social science and humanities colleagues are in dealing with ambiguity; ambiguity in those fields isn’t a bug, it’s just how things are.
Which is to say, people know how to deal with ambiguity, and it’s something we can become more comfortable with. There’s no right answer, but there’s frequently a number of good-enough answers. Choosing among them requires of us a decision, which can be uncomfortable. But that choice only requires the same judgement and discernment we start to learn towards the end of grad school, when we’re deciding not how to solve a given problem, but which problem to work on next.
We’ve talked about some ways to reduce ambiguity in the past:
Issues coming from others’ unclear communication, or even to some extent competing goals and incentives, can really only be resolved by talking to people. Talk with people across the organization (#171); sometimes using their communications style can help (#164).
Those competing goals and incentives can also be helped by building buy-in (#147). In a way, test-running different approaches is a form of experimentation, and…
Experimentation can be useful quite generally (#165) - here the idea isn’t to find The Answer, but to attempt to see what works, because often Action Brings Clarity (#174).
We can get comfortable with Management Decisions Under Ambiguity (#158), because getting used to making small decisions fast, aiming to routinely make “B” decisions and not (likely unachievable) “A+” decisions, will help us learn faster.
We can avoid creating unnecessary ambiguity for our team and peers by being clear in our own communications. That can mean having the hard conversations (#151), using feedback to make expectations clear (#140, #152, the guide) and not copping out by creating spurious processes instead (#139), making sure it’s clear what people’s jobs are when they join (#132), and prioritizing clarity when running projects (#150).
Now, none of this makes working under ambiguity easy; none of it produces a Right Answer you can simply go and get, and even choosing a decent next step can involve a lot of talking followed by an uncomfortable judgement call.
But just knowing there isn’t a right answer, or an optimal choice, can help a lot; and practicing making small low-risk choices in ambiguous situations can grow your comfort and develop your intuition for handling them.
Do you have an example of facing an ambiguous situation and seeing your way through it? Share it in a comment! Or if you need some advice on a current ambiguous situation, email me or even just book a 15 minute call.
And now, on to the roundup!
Managing Individuals
Strategy, not self-expression: How to decide what to say when giving feedback -
Giving feedback on something subjective -
Giving feedback to your team members, peers, or even those higher up than you is a very direct way of reducing unnecessary ambiguity about a very clear question: “did that work/behavior/decision meet our expectations and standards?”
Or, less charitably - withholding our feedback from them, keeping to ourselves information that they are entitled to know but would make us feel awkward to share, needlessly introduces even more ambiguity for the humans in your team or organization who are already wrestling with way too much.
Kao reminds us that the point of feedback is not to express ourselves:
When someone says they are open to feedback, it does not mean you should share all your frustrations. Contrary to popular belief, this is not your chance to express how you feel.
Especially when we’ve held back some piece of feedback for a long time, there’s a natural tendency to just want to “get something off our chest”. But the feedback conversation isn’t about us, it’s about making some change in the future:
If nothing changes, then the whole conversation—and all the emotional anxiety on both sides—was wasted.
Kao gives a very nice framework for doing this (please read the whole article):
Mentally forgive the person - the past is past
Identify what is most likely to motivate them to change
Say only the 10% that will actually change behaviour
Keep your eyes on the prize
Lew deals with one of the big reasons that I think this community has trouble giving feedback - that often, the feedback feels subjective. If the situation is something like “This is wrong” or “this is off by 20%”, our training taught us to handle that. But usually the thing we’d like to offer feedback on is squishier.
Lew points out what happens when you put off feedback for too long:
And that’s when things explode. When you’ve waited too long to say something, and so your silence was internalized as tacit approval. The very behavior you were hoping to curtail has now been exacerbated, and has spread to the rest of the team.
and offers some very explicit steps to take (again, please read the whole article):
Relieve yourself of the pressure to be “purely objective”
Reference the work, not the person
Acknowledge the conditions that enable the behaviour
Paint a compelling picture of a path forward
Lew also has a couple of examples which are very helpful.
Managing Teams
Trust Lets You Observe Reality -
Sometimes, ambiguity resulting from unclear communications is accidental. Sometimes, it’s intentional. One of the major reasons, especially as we become managers and leaders? People might not trust us.
Somewhere in your organization, you have a completely well-meaning, qualified, empathetic executive—maybe the CEO, perhaps the CFO, someone—screaming (quietly) to themselves, "What the hell is going on?"
It's obvious something is wrong—to everyone—but no one can seem to give them a straight answer on what it is (though everyone has a multitude of theories). And when that executive asks for someone, anyone, to take some sort of accountability for the state of things, they get bombarded with self-serving theories and hand-waving.
In our research lives, directly measuring physical reality was an option that was available to us, at least in principle.
That’s no longer an option in our current jobs. Any information we have about the organization and its work is mediated through other humans.
And if those humans don’t trust us? If people are worried we might throw them under the bus, or punish bad news, whether or not that worry is justified, what kind of information do you think we’re going to get?
Cutler points out (read the article) that:
Trust and psychological safety unlock your use of data.
and points out an important mechanism behind creating spurious processes (#139) or metrics (like individual productivity, #172) - if people can’t get information they trust, they’ll do something to try to fix that:
The more dysfunction in the environment, the harder it is to understand what's going on. The more dysfunctional the environment gets and the more pressure there is to understand what's going on, the more likely it is that the approaches to understanding what's going on will be inadequate and flawed and may even make the situation worse. It is a wicked loop. […] "I'm done with nuance! Just give me a simple frickin' metric!"
This doesn’t apply just to executives - as soon as we have some kind of role power or influence, people will reasonably worry about how we wield it, and if they don’t trust us we won’t be able to observe reality except through a set of warped, rear-end-preserving lenses.
And it’s tough to fix this. As he says,
Objectivity does not cure low trust. In fact, in most cases, pursuing objectivity is a byproduct of having less trust in discussing nuance and context.
And indeed, as with Lew’s article - in ambiguous situations, “objectivity” is a myth; all you can do is gather as much information as you can, from as many perspectives as you can.
So it’s important for people to trust us. The only real path towards that trust is to be trustworthy, and to make sure as much of our organization as possible is trustworthy. One-on-ones help, but they have to be coupled with following through on things we say we’re going to do, and behaving in a way consistent with the behaviour we want to see (like not punishing bad news when it’s presented).
Managing Within Organizations
Leadership requires taking some risk - Will Larson
One of the uncomfortable parts (for me, and for most of the STEM PhD managers I know) of dealing with ambiguity in efforts we lead is that we might make choices that could very easily be wrong. Not “sub-optimal”, but flat-out wrong; maybe even laughably wrong, in the retelling afterwards.
Larson points out that past some point, taking on responsibility for maybe being wrong is just the job, unless you want to be told what to do all the time:
Indeed, if you want a bottom-up decision-making environment, but feel that taking on personal risk is an unreasonable demand, then maybe you actually don’t want a bottom-up decision-making environment.
I think this point comes earlier for our kinds of roles than for most. Typically we’re in some kind of specialty technical function, where very quickly as you go up the org chart, people lack expertise in our field. Our judgement within our discipline is generally one of the reasons we’re hired; we have to be making the decisions not only about the field, but also about the people in the field.
Larson’s article (read it) discusses when you should (and maybe shouldn’t) take on risk yourself, and what owning that risk looks like.
Beck’s article offers a heads-up about making choices and leading projects; it’s in the context of software work, but I assure you that it’s applicable to any kind of change we might try to make:
Here’s the Sad Truth of [change] —it always gets worse before it gets better. We can ignore the Sad Truth, try to leap directly to the better[], but reality always gets a veto. Leaping creates risk.
This is especially important given this issue’s context of ambiguity, and nervous STEM PhD leaders making choices that might be wrong. If you try to make any nontrivial change in anything, something will initially get worse. That is not necessarily evidence that the change is bad (although anyone who opposed the change will 100% claim that it is). But it’s something that we have to be aware of.
Beck has a design skill he calls “succession” (read the article!) for planning the change with this in mind. Again, the context is software design, but any kind of change in our human systems benefits from this. When we are trying to deal with ambiguity in human systems, we don’t want ourselves (or others!) to be caught off guard by early negative signals.
It’s your fault they didn’t read your email - Greg Conklin
We can try to reduce ambiguity for our bosses and for others in the organization by clearly communicating. But that won’t matter if they don’t read or hear what we say.
My fellow STEM PhDs, you know I love you all, but many of us are absolute rubbish at clearly and succinctly conveying needed information.
This is not entirely our fault; it’s kind of how academia raised us.
Our class work was all about grading us for demonstrating how much we knew.
Getting papers published was not just about finding a great new result, but communicating that we were aware of all of the background literature with relevant context, and had thought of every possible counter to the argument we were presenting.
As a result, when we try to share a piece of information or present a recommendation, to higher ups or our team - we’re kind of a lot. People are busy! Sure, they could read all of that, or… they could just not. One of those is easier than the other.
Frankly, in a world with Copilot/ChatGPT/Claude/etc., this is eminently fixable - we all have a helpful editor in our browser we can iterate with to get something short and to the point.
But first we have to realize that doing that is our job.
From Conklin’s article (read the whole thing):
First, it’s on you to communicate well…
…not on the recipient to digest and internalize everything in their inbox. I occasionally hear leaders say, “Didn’t you read my email!? It had all that in there!” as if their emails are Brussels sprouts. “Eat your vegetables, children!” I mean… if you are serving Brussels sprouts, you really should do it right.
A repeated theme of the Manager-Tools podcast is that “communication is what the listener does” (which is very consistent with basically the entire field of semiotics, fwiw). If the listener didn’t hear what we had to say, we didn’t communicate.
Conklin talks about this specifically in the context of two kinds of email - asking someone to do a thing, or getting someone to understand a thing. The article discusses structure and tips for each of those cases, with some nice examples. It’s definitely worth a read.
That’s It…
And that’s it for the week! I hope it was useful; let me know what you thought, or if you have anything you’d like to share with me about how a newsletter or community about management for people like us might be even more valuable. Just email me, leave a comment, reply to this newsletter if you get it in your inbox, or schedule a quick Manager, Ph.D. reader input call.
Have a great weekend, and best of luck in the coming weeks with your team,
Jonathan