AI & The Lost Joy of Finding Things Out
AI is rapidly changing software development, promising incredible gains in speed and efficiency. AI agents are getting really good at building the basic structure of code, handling some complex tasks automatically, and spitting out solutions almost instantly. But something keeps nagging at me…
I can't shake the feeling that this huge leap forward is also changing, at a deeper level, how software gets made. After a couple of successful experiments building and deploying software with AI myself, I've realised that something fundamental about software is being overlooked.
Software, as anyone who's worked with it knows, isn't a static, finished product. It's inherently fluid, more of a continuous process than a fixed entity. Beyond a landing page, software demands ongoing maintenance, regular updates to add new features or adapt to changing environments, and relentless bug fixing.
Successful software products are usually in use much longer than in development, making maintenance even more crucial. If software does not continue to adapt to changes in needs and environment, it becomes progressively less useful. Changes are inevitable, and occur for several reasons:
New requirements emerge when the software is used
The business environment changes
Faults must be repaired
New computers or equipment are added to the system
A growing user base makes the existing performance or reliability insufficient
Overview of Software Maintenance and Evolution, Jeff Offutt, George Mason University
While AI can undoubtedly speed up initial development, I wonder if we're losing sight of the fact that software's long-term health and adaptability still heavily rely on experienced developers. These are the professionals who possess the deep understanding to navigate its intricacies, trace unexpected errors, and manage the complex interactions that inevitably emerge as software evolves over time. This inherent fluidity and the long lifecycle of software are key aspects that current AI-driven development approaches may be subtly undermining.
My perspective on this shift is not purely technical; others, such as Namanyay Goel and Tim O'Reilly, have already covered the mechanics of AI in software extensively. I'm concerned with a more nuanced change in experience.
Serendipity & Detours
This shift in software development parallels changes in other domains of learning, such as language acquisition. Just as AI-driven coding tools alter the nature of problem-solving, digital translation tools have transformed how we engage with language.
For generations, language acquisition involved the deliberate act of searching through a paper dictionary. I personally recall keeping a small notebook into which I hand-copied new words I had looked up in my dictionary. At the time, this was time-consuming and perhaps even tedious, but it proved extremely useful and inherently serendipitous. Looking up one word often led to the discovery of several others, sparking unexpected connections and broadening vocabulary in ways a direct digital search might miss. It's not that digital search doesn't work; it just feels like something is missing.
This "stumbling upon" knowledge, an outcome of the journey rather than merely the destination, is increasingly absent in our technology-driven world, particularly in the rapidly evolving field of AI-assisted software development. Think of it as a side effect of the new way of working. I appreciate the tools and efficiencies AI provides, accepting this as a new way of operating. Yet, while acknowledging the boost in productivity, I cannot shake the feeling that alongside these gains, something valuable is being lost – the "joy of finding things out" through serendipity and detours.
These 'detours' – the explorations, investigations, and even struggles – are, in my view, integral to the development process. Much like the unplanned discoveries in a paper dictionary, they are fertile ground for deeper learning and unexpected insights, potentially leading to novel interface interactions or a complete rethinking of a feature or project. However, in my own interactions with AI coding agents, especially in "vibe coding" or Chat-Oriented Programming (CHOP), I noticed a distinct shift in focus. The overriding thought became 'I need it to work!', leading to endless back-and-forth messaging, fixated on a functional output, often at the expense of critical thinking and active problem-solving. The subtle danger I perceived was an uncritical attachment to the first workable idea, rather than a considered exploration of alternatives.
Acceleration has a price
This drive for immediate functionality is amplified by the pressures of today's fast-paced software industry, further incentivised by the viral promotion of AI tools on platforms like X and TikTok. "Vibe coding" outputs are often accepted simply because they eventually function, regardless of solution elegance or long-term robustness. Commentators have noted the increased debugging demands of AI-generated code and the complex maintenance implications of hastily implemented "make it work" solutions. Whether AI can effectively address these downstream challenges remains an open question. Moreover, much of the publicly demonstrated AI coding capability is currently limited to relatively straightforward software – personal CRMs, basic applications, or even media editors. To be clear, I completely accept AI as a tool for building personal software.
Dr Maria Panagiotidi, drawing on Lisanne Bainbridge's seminal work on the "Irony of Automation" [UX Psychology Substack], highlights how increased automation can shift the human role from active participant to passive monitor.
She argues that this can lead to a decline in vigilance, a slower response to unexpected failures, and—crucially—an erosion of manual and cognitive skills.
If developers become dependent on AI for instant solutions, what happens to their ability to think independently, navigate complex codebases unaided, or devise truly novel solutions? Even if we were to accept that AI could eventually generate complete and optimal codebases, a fundamental question remains: If AI can autonomously engage with existing software—and it’s starting to—where is the intrinsic value of building software for people? What role remains for humans beyond the initial request or prompt?
This potential deskilling effect contrasts intriguingly with fields like image generation, where AI tools such as Midjourney and DALL-E foster a different kind of creative engagement. While they require time, iterative refinement, and thoughtful prompting, they seem to cultivate new skill sets distinct from traditional artistic practices. Whether this constitutes a genuine creative craft or a loss of human artistic aura remains debatable.
Our tools shape us, but we can shape them too
Finding a balanced path forward is essential. The missing piece, as I see it, is the development of AI tools that truly augment human intellect in software development rather than diminish it. This means designing AI not just as a solver but as a collaborator—one that guides, teaches, and enhances human creativity rather than merely replacing manual effort. To preserve foundational skills, we may need intentional practices such as manual coding sprints, alongside a fundamental rethinking of software development education to ensure problem-solving remains central, even as AI proficiency grows.
Yet, achieving this balance may ultimately be a personal endeavour. Economic pressures, investor expectations, and industry momentum often prioritise speed and efficiency over long-term skill retention. The responsibility may fall on individual developers to maintain their craft—to deliberately foster the ability to think critically, solve problems creatively, and embrace the joy of finding things out.
The real challenge isn't just keeping pace with AI—it’s ensuring we don’t automate away the very skills that make us innovative in the first place.