It’s been a long while since my last rambling heretical outpouring. Buckle up.
Compromise sucks. If you try to accommodate everyone’s demands you end up with The Homer. A lot of technology teams suffer from this: we try to address all concerns continuously, rather than focusing on the few things that will genuinely make a difference. It’s entirely human nature to want to please others, and to avoid saying no.
Our versions of The Homer are out-of-date systems, half-deployed features, understaffed projects, and the like. All arise, to some extent, from trying to satisfy every demand rather than being clear on our policy and forcing strict prioritisation of objectives.
This brought to mind a Slow Boring essay, “Making Policy for a Low-Trust World” (linked below). Primarily, it asserts that the hallmark of good policy is that “it does exactly what it says on the tin” and has the following characteristics:
– It’s easy for everyone, whether they agree with you or disagree with you, to understand what it is you say you are doing.
– It’s easy for everyone to see whether or not you are, in fact, doing what you said you would do.
– It’s easy for you and your team to meet the goal of doing the thing that you said you would do.
That’s not a guarantee of political or policy success. Maybe you will pick terrible ideas and be a huge failure anyway. But this triad for success under conditions of distrust at least creates the possibility of success, where people will look back and decide that what you did worked. […]
And it will almost certainly involve a bit more high-handedness and less community consultation. But it allows you to establish yourself as conditionally trustworthy in the sense that your policies do exactly what it says on the tin. And if you pick policies that work, you’re then in a position to rebuild trust as people see that confidence in you is rewarded.[https://www.slowboring.com/p/making-policy-for-a-low-trust-world]
I reckon we could and should be bolder on some of our policies. An example of something we got excited about recently was “no work without measurement”. We haven’t forcefully implemented it, but what if we did? How about a policy of “we will not tolerate out-of-date systems” or “post-incident remediation must always be prioritised above all else”? Most companies already have these policies, but can you say, hand on heart, that you adhere to them rigorously and consistently? Does it do what it says on the tin, or are you compromising to satisfy other goals, busily building your Homer Car?
We also should be unashamed to acknowledge our specific expertise. Because technology has become ubiquitous there’s a temptation to presume that everyone is an expert, or to compromise on technical expertise on the assumption that the non-technical requirements are more important or of higher value. Absolutely we need to look after our customers, stay in business, and turn a profit; but that’s not possible if we consistently demote technical issues as subordinate rather than integral to those goals.
There’s an analogy with urban planning and consultation: gathering community feedback about cycle lanes or streetscaping is not always useful, because the people you’re consulting are not experts in the field, nor are they a coherent group with similar requirements:
The result of spending a lot of time gathering incoherent feedback from an unrepresentative group of people who don’t necessarily know what they are talking about is that you end up (slowly) doing things that either don’t work well or aren’t cost-effective. This leaves everyone feeling like the city is run by idiots and only further erodes trust.
By contrast, if you do something in a timely and cost-effective manner and it delivers beneficial results, then people might give you some credit and you’ll earn trust for the future.
Of course, if you bypass community consultation and do something that turns out to not work, you’ll look like an idiot. But if your departments of public works and transportation are staffed by people who don’t know what they’re doing, that’s the problem, not a lack of community trust. After all, the community shouldn’t trust you if you don’t know what you’re doing. The point is that there’s no magic process that makes this work. What you need is a team that is worthy of trust, then it needs to act like it deserves to be trusted, and then it needs to deliver something good.
Don’t deliver something over budget and behind schedule that turns out not to work well and then congratulate yourself on your community outreach.[https://www.slowboring.com/p/making-policy-for-a-low-trust-world]
What do you think? Could you be better at adhering to policies that “do what they say on the tin”, and risk a bit of stakeholder discomfort? Should we be more uncompromising when it comes to technical expertise in the interest of delivering safer, more cost-effective solutions?