Gen AI could be the worst and the best of us, but it will be what we make it.
AI isn’t coming; it’s here. It’s everywhere. It’s in search engines and cars, and for the most part we don’t worry about it, or even think about it.
Then sometimes we do.
We worry for two totally legit reasons. First, not everything on the internet is true (I know, right?), so if that’s the reference library from which an AI figures out how the world works (and it often is), we may have a problem. Second, we know from what your maddest relative reposts on Facebook that algorithms tend to take you deeper into a rabbit hole rather than broadening your horizons. Which means tech can lead you astray even when it’s designed to be helpful.
My first encounter with ChatGPT was pretty eye-opening, and it serves as a cautionary tale in generative AI and bias.
Losing your generative AI virginity: assumptions and errors
The first time I used ChatGPT – in the heady days of late 2022, when the Liz Truss experiment had just come to an end, Kate Bush was back in the charts and suddenly everyone was talking about OpenAI – my work bestie and I had the perfect task to try it on. My colleague had to write about themselves in the third person for a conference bio. No one likes doing that, do they? Sociopaths, maybe?
Off we went: “I need a 150-word bio for a conference of project delivery professionals. I’m a qualified project management professional, and some of my qualifications and jobs have included blah blah…” I should add: my colleague was smart and data-protection savvy, and didn’t actually enter a name or any personal details.
We quickly got back an impressive, elegantly written bio, with fixable US English styling and just one teeny-tiny issue: it had elected to use he/him pronouns throughout.
I’m guessing it had a look around the available data and concluded that project managers were generally men, therefore… It was a lesson in bias and the assumptions it leads to. And a lesson in how you can start out feeling a bit awkward about talking about yourself in the third person, then be made to feel a fair bit worse because ChatGPT thinks a woman couldn’t be that highly qualified in her chosen profession.
What did we learn?
AI is awesome, but the internet is a vast and varied place, with information ranging from the highly credible to utter bobbins. The bottom half of the internet is a dark and misleading space. AI models trained on such data can inadvertently pick up and perpetuate biases, leading to assumptions and errors like the one we encountered. We were right to worry.
The evolution and an important caveat
AI models like ChatGPT have come a very long way since then. Developers continually work to improve the training, making them more accurate and less biased. Try the same exercise today and it won’t make the same mistake. But we’re all essentially red-teaming* gen AI right now, on a global scale, and the biases present in the data will still leak out.
Be discerning. Know the limitations. Train your AI on good data. Give it good instructions.
The unbiased assessor: AI as the best of us
So why, you might ask, did I say out loud in a meeting the other day: “We should have an AI member of every assessment panel. It’d be brilliant.”
That’s because (unlike Liz Truss) AI has learned a lot since 2022, and so have I.
Give it the right information and the right instructions and AI can be the most unbiased Assessor Of Stuff. It can evaluate applicants or proposals based solely on the criteria set out in the specifications, without any regard for the individual's gender, social class, dress sense, or the fact that they have the same name as that girl who broke your heart. AI has a heart of stone. And apparently I’m into it.
Imagine having an AI on your interview panel. It would assess candidates purely on their qualifications and experience, free from any unconscious biases that human interviewers might bring. Looking nervous won’t matter unless you’ve explicitly said the job requires nerves of steel.
There are some logistical issues with using AI for interviews while the technology is still catching up with science fiction, but what about assessing bids? An AI can objectively compare proposals against requirements, making for a fairer evaluation process.
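To make that concrete, here’s a minimal sketch of what “right information, right instructions” could look like in practice. This isn’t Engine’s product or a recommended implementation – just an illustration using the openai Python package, where the model name, the criteria, and the redaction step are all placeholder assumptions:

```python
# A minimal sketch of criteria-only bid scoring. Assumes the `openai`
# Python package (v1+) and an OPENAI_API_KEY in the environment; the
# model name, criteria, and redaction step are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Hypothetical scoring criteria – in reality, lifted from your specification.
CRITERIA = """\
1. Meets the technical specification (0-10)
2. Delivery plan is credible and properly resourced (0-10)
3. Whole-life cost is within budget (0-10)
"""

SYSTEM_PROMPT = (
    "You are a member of an assessment panel. Score the proposal strictly "
    "against the numbered criteria provided. Do not consider, infer, or "
    "mention the bidder's identity, size, or history, or anything outside "
    "the text supplied. Return a score and a one-sentence justification "
    "per criterion."
)

def redact(proposal_text: str) -> str:
    """Placeholder: strip names, logos and other identifying details
    before the model ever sees the proposal."""
    return proposal_text  # real redaction would go here

def score_bid(proposal_text: str) -> str:
    """Ask the model for criteria-only scores on an anonymised proposal."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you've approved
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Criteria:\n{CRITERIA}\nProposal:\n{redact(proposal_text)}",
            },
        ],
        temperature=0,  # keep scoring as repeatable as possible
    )
    return response.choices[0].message.content
```

The design choice that matters here is that the model never sees who wrote the bid, only what it says against the criteria you set – and, as the next section argues, its scores should inform the panel’s decision, not replace it.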
Don’t get carried away
At this point we will pause because we’re two beats away from a logical fallacy. You could extend the argument and say (red-faced and in the tone of voice of someone who thinks everything is probably the thin end of some wedge or other), “Why not let AI do the whole assessment solo and just pick the winner? Eh? EH?! Better yet, let it look online and just find the right person/company/app to bring in and we’ll all just go home?! Pah!”
We won't. Because that would be dumb.
It would be dumb because it brings back the bias. Dumb because it means you’d have to have work to get work, and start-ups wouldn’t get a look-in. Dumb because it’s a total failure to recognise the value of (human) professional judgement based on expertise and lived experience. And dumb because you’re smart, so you should still make the decisions you’re accountable for, not AI.
Let's be responsible out there.
* Red-teaming, as lots of you will know from cyber security and – increasingly – the over-bureaucratised world of project planning, is when you bring in critical friends or hackers to test your system/plan/solution thoroughly, helping you prepare for and prevent real attacks. Like tyre-kicking, but useful. And, if you like to play with things until they break, it’s also excellent fun.
Engine builds private AI tools that keep things simple: private GPTs trained on content you curate, giving your teams answers they can rely on and starting you on an AI change programme you can manage. We charge a one-off, affordable build cost and a single, fixed monthly cost for your whole organisation. Email us at info@engine-ai.co.uk.