25 years later: Good intentions still don't fix bad systems
This article is a bit long-winded, but there is a point. Bear with me.
This week marks 25 years of working in corporate(ish) environments. I had kinda forgotten about it.
My wife and I went on a cruise for our honeymoon. As soon as we got into our cabin, we crashed for a nap. When we woke up, we were already at sea. Back then, mobile coverage was largely non-existent in deep water, so we were effectively off the grid.
We were both finishing our studies; neither of us had permanent employment yet. I was a little nervous. As soon as we were back on dry land, I turned my phone on. Among the voicemails was one from the person who gave me my first job. Phew. I had a job. Timing is everything. And I am grateful to Cheryl to this day.
So our recent 25th wedding anniversary reminded me that I would soon be hitting the work milestone. And, of course, I started thinking about all the work environments I’ve been in, and many of the people I worked for and with. That made me wonder: can I find a pattern to distinguish between the good leaders and the not-so-good ones?
I haven’t quite found it yet. Importantly, it’s not about demographics. Most of my leaders were female, and I’ve had both good and bad ones. I’ve had fewer bad male managers in absolute terms, but only because there were fewer males overall; the ratios are very similar. So it’s not gender. Age certainly didn’t play a role. Neither did race. Or language. I can’t find a demographic angle.
This isn’t surprising. With some exceptions, we should be surprised to see decisions driven, in whole or in part, by demographics that play no valid role in them. So, irrelevant demographics are out; behaviour is squarely in. We have some work to do on that front, but the intent is clear.
But while we humans are slowly getting better at ignoring the stuff that shouldn't matter, like where someone was born or what their skin colour is, our systems are not there yet.
Models don't care about intent. They just hunt for patterns. If we aren't careful, they find proxies for the very things we’ve ruled out. A model might see "postcode" and treat it just like "race," sneaking the bias right back in through a side door. Of course, there are potentially some valid uses of the postcode variable, like pricing for theft risk in insurance (if theft is higher in some areas than others, and we have a documented risk assessment, etc.).
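To make that side door concrete, here is a minimal sketch in Python with synthetic data (every column name and number below is hypothetical, invented for illustration). The protected attribute is withheld from the model entirely, yet the outcome gap survives because postcode stands in for it.

```python
# A sketch with synthetic data: dropping the protected attribute is not
# enough when a correlated proxy (here, postcode) stays in the feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical world: group membership clusters strongly by postcode.
group = rng.integers(0, 2, n)                               # protected attribute
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)  # aligned with group 90% of the time
income = rng.normal(50 + 10 * group, 5, n)                  # a legitimate-looking feature

# Historical labels carry a group effect: the bias we don't want to repeat.
label = (income + 8 * group + rng.normal(0, 5, n)) > 58

# Train WITHOUT the protected attribute: only income and postcode go in.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, label)

# The approval-rate gap between groups persists via the proxy.
approved = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2f}")
```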
In my 25 years of working, I’ve learned that good leaders ask questions, and often these are hard questions. We ask hard questions of our algorithms, too. We don't just assume that our good intent makes it into our code. We check that the system matches our intent: driving outcomes that are fair and effective, using only the data that matters.
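Continuing the synthetic sketch above, one such hard question can even be automated: how well can the model’s inputs reconstruct the protected attribute? A high score means a proxy is hiding in the feature set, whatever our intent was.

```python
# One hard question as a check (reusing X, group and LogisticRegression from
# the sketch above): can the feature set reconstruct the protected attribute?
from sklearn.model_selection import cross_val_score

proxy_auc = cross_val_score(
    LogisticRegression(), X, group, cv=5, scoring="roc_auc"
).mean()
print(f"protected attribute recoverable from features: AUC = {proxy_auc:.2f}")

# A simple review gate; 0.6 is an arbitrary illustrative threshold.
if proxy_auc > 0.6:
    print("Flag for review: features are a strong proxy for the protected attribute")
```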
Disclaimer: This article is not legal advice and may not be relevant to your circumstances. It was written for specific contexts within banks and insurers and may not apply to other contexts or other types of organisations.