"Use AI" is answering the wrong question
We are certainly living in interesting times. Specifically, I am talking about the rise of "Artificial Intelligence" (or, more correctly, "Large Language Models", AKA "LLMs") in software development. Whether you believe these tools are useful, or that they will lead to the demise of the human race¹, there is no doubt they are surrounded by the biggest hype cycle anyone has ever seen. There is an unprecedented (many now say unsustainable) amount of investment happening in this area. But any investment requires a return, which means marketing and sales teams are working overtime to bring in new paying customers.
Unfortunately, as a result of these intensive sales campaigns, there are companies - many of whom should know better - becoming ensnared in the unfulfilled promises made by unscrupulous vendors, and "drinking the Kool-Aid". Once the contracts are signed by senior management and the (not insignificant) bills are being paid, reality sets in: the company that bought the promises now needs a return on its investment too. So what happens next? The edicts. How many of these phrases sound familiar?
"We are going all-in on AI"
"Use AI"
"Everyone must use this integrated AI tool"
"Don't talk to your colleagues/subject matter experts any more, ask the AI"
Management becomes desperate to find a use for the new, expensive technology they have been convinced to buy, often without any due diligence to ensure it really does what was intended. And of course many will cheer it on, because who doesn't like a bit of new and shiny, whether it's needed or not? But there is one huge problem here:
"USE AI!" is answering to the wrong question
"All in on AI" edicts are answering the question "How do we justify spending so much on this technology?". It's classic sunk cost fallacy. "We have spent unwisely, so now we need to justify it" (often with a subtext of "My bonus depends on it"). If the technology was genuinely valuable, people would naturally gravitate towards it; as soon as there needs to be an edict to use something, it is obvious that it is a proverbial white elephant. Or worse, playing fast and loose with the analogies here, an albatross tied around the company's neck.
So what is the real question companies need to be asking? What should senior management really be focussed on? My best suggestion would be:
"How can we deliver value better?"
That's it. The game is all about delivering value better. What is value? What is "better"? Well, that's for you to decide, but here's a clue: what does whoever buys your product want? Yes, these are tough questions that many senior managers would like to avoid. But here's the rub:
What if I told you that there are techniques and approaches that would make your teams far more productive in delivering said value than any LLM tool? And that teams can learn to use them at a fraction of what you are being charged for LLM licensing? That does sound like a compelling idea, doesn't it? It is also 100% true. It is perfectly possible to deliver software reliably, sooner, and with higher quality than most other companies, without needing LLM "help". In many cases the LLM has been shown to get in the way, actively dragging down quality, speed and maintainability (my own suspicions confirmed).
By applying proven engineering approaches you can actually outstrip and outperform LLMs when creating software. Further still, once you have mastered these engineering approaches without LLMs, you will have a sensible benchmark to judge the results of LLM-assisted development against, and use it where it does help.
LLMs are probably not worth the extortionate licensing money, let alone the environmental impact, or the human impact on the low-paid folks who train them. Compared to teams practiced in high-quality engineering approaches, LLMs look like something of a liability at best. Once this is realised, the budget can be redirected to where it should be focussed: training software developers in high-performing development approaches.
~~~~
Interested in learning more about how your teams can outperform LLM-aided code? Get in touch! If I can't help, I know plenty of other trainers who can.
¹ Personally, I believe they are neither particularly useful yet nor a direct threat. Skynet Terminators won't happen. But they are yet another threat to this planet's climate in their current form, not to mention a threat to the world economy due to the circular nature of the investments.