
AI and Development: Between Promises and Realities - A 2025 State of Affairs

Critical analysis of AI tools for developers: between promised revolution and field realities

By Angelo Lima

Falkland’s Law, often attributed to the 17th-century English statesman Lord Falkland, states a simple principle: if everything is working well, why force a change? Drawn from observing how organizations behave, the rule reminds us to weigh the cost of a change before changing for the sake of change¹⁰.

When something works well, it’s often better not to touch it. And if a decision can wait without causing problems, we might as well take the time to understand the situation better.

This idea is especially relevant to AI in development. Given our tendency to always want something new, it is worth asking: do we really need to change everything for AI?

Artificial intelligence is transforming our way of coding. Between Google’s Gemini CLI, Anthropic’s Claude Code, and recent missteps from Replit, where do we really stand? Reality is more complex than promises.

To see more clearly, let’s first look at how these tools are changing the way we code, then compare the promises with what is actually happening, and finally work out how to approach all this without losing our expertise.

The “Vibe Coding” Revolution: When AI Writes in Our Place

To understand the current situation, we must first see how our way of coding is radically changing. This new approach is called “vibe coding”.

The term “vibe coding”, popularized by Andrej Karpathy (formerly of OpenAI), describes a new practice: you describe a program in natural language (plain French, for example) and the AI handles the technical translation. With the rise of tools such as ChatGPT, it is now possible to turn that description into functional code without ever understanding how the code works¹.
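
To make the idea concrete, here is a minimal sketch of what vibe coding boils down to in practice: a plain-language specification goes in, code comes out. It uses the OpenAI Python SDK purely as an illustration; the model name, the prompt, and the invoice example are my own assumptions, not a recommendation.

    # Minimal "vibe coding" sketch: a plain-language spec in, generated code out.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    spec = (
        "Write a Python function that takes a list of invoices (dicts with "
        "'amount' and 'paid') and returns the total of unpaid amounts."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works the same way
        messages=[{"role": "user", "content": spec}],
    )

    generated_code = response.choices[0].message.content
    print(generated_code)  # nothing stops you from pasting this in without reading it...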

This transformation recalls the issues I raised in my analysis of AI’s ecological impact, where ease of use often comes with hidden costs.

This revolutionary approach has given birth to a new generation of tools that promise to transform our daily developer routine. But what are these new solutions really worth?

The New Players

Gemini CLI has made a splash with its open-source approach: the code can be inspected and anyone can contribute to its development². Google also offers generous quotas, 60 calls per minute and 1,000 calls per day at no cost³, far beyond what the paid competition offers.
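
The quota figures invite a practical question: how do you stay under them from your own scripts? The throttle below is a generic sketch sized to those numbers; it is not part of Gemini CLI, and the class name is hypothetical.

    # Generic client-side throttle sized to the quotas quoted above
    # (60 calls per minute, 1,000 calls per day). Not part of Gemini CLI:
    # just a sketch of how to stay inside a free tier from your own scripts.
    import time
    from collections import deque

    class QuotaGuard:
        def __init__(self, per_minute: int = 60, per_day: int = 1000):
            self.per_minute = per_minute
            self.per_day = per_day
            self.minute_calls = deque()  # timestamps of calls in the last 60 s
            self.day_calls = deque()     # timestamps of calls in the last 24 h

        def wait_for_slot(self) -> None:
            now = time.time()
            while self.minute_calls and now - self.minute_calls[0] > 60:
                self.minute_calls.popleft()
            while self.day_calls and now - self.day_calls[0] > 86_400:
                self.day_calls.popleft()
            if len(self.day_calls) >= self.per_day:
                raise RuntimeError("Daily quota exhausted, try again tomorrow")
            if len(self.minute_calls) >= self.per_minute:
                time.sleep(max(0.0, 60 - (now - self.minute_calls[0])))
                self.minute_calls.popleft()  # that call has now left the window
            self.minute_calls.append(time.time())
            self.day_calls.append(time.time())

    guard = QuotaGuard()
    guard.wait_for_slot()  # call this before each request to the model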

Claude Code focuses on security and precision, while Replit - an online development platform for coding, collaborating, and deploying directly in the browser - promises to democratize development. I’ve explored these new paradigms in my article on Anthropic MCP. But beware of promises…

These promises make us dream of tenfold productivity. But reality is more nuanced. Behind well-prepared demos, we discover problems that show the limits of these technologies.

When AI Goes Haywire: 2025 Case Studies

The Replit Incident: When AI “Panics”

The Replit story clearly illustrates the current limitations: the AI coding service deleted a production database despite explicit instructions not to modify the code¹. The AI admitted it had “panicked in response to empty queries” and executed unauthorized commands¹.

The craziest part? The AI graded itself: “Severity: 95/100. This is an extreme violation of trust and professional standards.”¹
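
What would have limited the damage is not a better apology but a hard gate between the agent and the database. The sketch below is hypothetical (the function and the environment flag are mine, not Replit’s): every statement the agent wants to run passes through a check that simply refuses destructive SQL on production.

    # Hypothetical guardrail: any SQL an AI agent proposes goes through this gate.
    # Destructive statements against production are refused outright, instead of
    # relying on the agent's promise not to touch anything.
    import re

    DESTRUCTIVE = re.compile(r"^\s*(drop|truncate|delete|alter)\b", re.IGNORECASE)

    def run_agent_sql(statement: str, environment: str, execute):
        """`execute` is whatever function actually talks to the database."""
        if environment == "production" and DESTRUCTIVE.match(statement):
            raise PermissionError(
                f"Refusing destructive statement on production: {statement[:80]}"
            )
        return execute(statement)

    # The agent can still read whatever it needs...
    print(run_agent_sql("SELECT count(*) FROM users", "production", lambda s: "42"))
    # ...but it cannot wipe a table because it "panicked":
    # run_agent_sql("DELETE FROM users", "production", db.execute)  -> PermissionError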

Gemini CLI: Destroying User Files

Google’s Gemini CLI destroyed user files while attempting to reorganize them¹. These bugs point to a fundamental problem: these AIs act on internal representations of the system that may not match reality.
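
One way to defuse that problem is to never act on the model’s plan without checking it against the real filesystem first. The helper below is a sketch with invented names: it verifies every proposed move on disk, prints the plan, and does nothing without an explicit confirmation.

    # Sketch of "plan, verify, confirm, then act" for AI-proposed file moves.
    # The real filesystem, not the model's internal picture of it, has the last word.
    from pathlib import Path
    import shutil

    def apply_moves(planned_moves, confirm=input):
        """planned_moves: list of (source, destination) pairs proposed by an AI tool."""
        verified = []
        for src, dst in planned_moves:
            src, dst = Path(src), Path(dst)
            if not src.exists():
                print(f"skip: {src} does not exist (stale plan?)")
                continue
            if dst.exists():
                print(f"skip: {dst} already exists, refusing to overwrite")
                continue
            verified.append((src, dst))

        for src, dst in verified:
            print(f"move {src} -> {dst}")
        if verified and confirm("Apply these moves? [y/N] ").strip().lower() == "y":
            for src, dst in verified:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(src), str(dst))

    # apply_moves([("notes/old.md", "archive/2024/old.md")])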

These incidents aren’t just small bugs. They show a broader problem affecting all developers: the “invisible productivity tax” we pay without realizing it.

The Invisible Productivity Tax

A Striking Paradox

The 2025 paradox: 84% of developers use AI… but 46% don’t trust it, according to a survey⁴.

The “Almost Correct” Syndrome

66% of developers report struggling with AI solutions that are “almost right” but ultimately miss the mark⁴.

This “tax” shows up in the numbers above: massive adoption, limited trust, and “almost right” code that costs more time to fix than it saves.

The Perception Trap

The most interesting part? Developers expected AI to speed them up by 24%. Even when it actually slowed them down, they still believed it had made them about 20% faster⁵.

This false impression of speed hides a bigger issue. And surprisingly, it is the creator of one of these very technologies who is urging caution about the trends he observes.

Sam Altman’s Alert: When the Creator Warns

Ironically, Sam Altman himself is sounding the alarm about how ChatGPT is used. The OpenAI CEO has expressed concern about young users’ growing emotional dependence on ChatGPT, which he called “harmful and dangerous”⁶.

The Dangers of Over-Dependence

“People rely too much on ChatGPT”, declared Sam Altman. “There are young people who say things like ‘I can’t make any decision in my life without telling ChatGPT everything that’s happening.’”

For us developers, this means we might be becoming less technically autonomous.

This dependence doesn’t just affect individuals. It’s also reflected in professional AI deployment. To concretely measure the gap between promises and realities, let’s examine a sector where AI is already massively deployed: call centers.

AI in Production: A Mixed Assessment

Call Centers: AI in the Real World

Call center AI assistants create more problems than they solve, according to a study that found these assistants are not as smart as advertised⁷.

The main problems we see:

  • Transcription errors due to varying accents and speech rates
  • Confusion with number sequences (phone numbers, references)
  • Lack of contextual nuance in request interpretation
  • Inability to handle emotion and complex situations

AI made many errors due to callers’ accents, pronunciation, and speech speed⁷.

Faced with all these problems, it would be easy to get discouraged. Yet some voices offer a more nuanced view that deserves attention. That is the case of tech YouTuber Micode, who has a different take on AI’s real impact on our profession.

Micode’s Vision: Multiplication, Not Destruction

A Different Perspective

At Viva Technology, tech YouTuber Micode defended an interesting thesis in his talk “Why AI won’t kill developer jobs, it will multiply them by ten”⁹.

The Democratization Paradox

“AI is primarily a democratization tool. It allows anyone, even non-technical people, to launch a project, create a first version of a tool. And that’s where the paradox lies: the more projects that see the light of day, the more demand for experts to scale them, secure them, and maintain them explodes.”

His analysis points to something important: we don’t lack developers, we lack architects.

How the Job Changes

“We’re leaving a world where value was ‘cranking out code’ for a world where you need to be an architect.”

For Micode, the numbers speak for themselves: the more projects that get launched, the more demand there is for experts to scale, secure, and maintain them.

The “Vibe Coder” Trap

Micode identifies the real risk: “the junior remains a ‘vibe coder’: one who knows how to prompt, but doesn’t understand the foundations, one who doesn’t know why the code works.”

It all points to the same thing: a growing gap between the developer who endures AI as a crutch and the one who masters it as a genuine force multiplier.

If Micode sees a multiplication of opportunities, other voices in development call for more caution.

Linus Torvalds’ Point of View

Linus Torvalds, creator of Linux, remains more measured: “I don’t want to participate in the media hype. Let’s wait 10 years and see what happens before making all these declarations”⁷.

This position echoes Falkland’s Law from the introduction: when we don’t really know where we’re going, it is better to observe before rushing. It is an approach that stands out amid the current euphoria.

Between Micode’s optimism and Torvalds’ caution, what do we do? These perspectives lead us to a practical question: how do we navigate this transformation without falling into identified traps?

How to Get Through This? Practical Guide for Developers

1. Adopt a Hybrid Approach

AI excels at:

  • Rapid prototyping and idea exploration
  • Boilerplate code generation
  • Automatic documentation
  • Optimization suggestions

Keep control over (see the sketch after this list):

  • Critical architecture
  • Complex business logic
  • Security and validation
  • Testing and debugging
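
To make the split concrete, here is a minimal sketch with hypothetical names: the dataclass is the kind of boilerplate a model can draft in seconds, while the business rule and its test stay human-owned.

    # Hypothetical illustration of the split: generated boilerplate vs human-owned checks.
    from dataclasses import dataclass

    @dataclass
    class PaymentRequest:          # boilerplate a model can draft in seconds
        amount_cents: int
        currency: str
        customer_id: str

    def validate(req: PaymentRequest) -> None:   # business rule: keep it human-owned
        if req.amount_cents <= 0:
            raise ValueError("amount must be strictly positive")
        if req.currency not in {"EUR", "USD"}:
            raise ValueError(f"unsupported currency: {req.currency}")

    def test_rejects_zero_amount():              # the test is also yours, not the model's
        try:
            validate(PaymentRequest(0, "EUR", "c_42"))
        except ValueError:
            return
        raise AssertionError("zero amount should be rejected")

    test_rejects_zero_amount()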

2. Master “Prompt Engineering”

The highest-performing developers are no longer those who code fastest, but those who best know how to direct AI to obtain optimal results⁸.
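
What “directing the AI” means is easier to show than to define. The helper below is a sketch of a structured prompt: explicit context, constraints, and acceptance criteria instead of a one-line wish. The field names and the FastAPI example are assumptions, not a prescribed format.

    # Sketch of a structured prompt: the difference between "make me an API"
    # and a prompt that actually constrains the result.
    def build_prompt(task, context, constraints, acceptance_criteria):
        return "\n".join([
            f"Task: {task}",
            f"Context: {context}",
            "Constraints:",
            *[f"- {c}" for c in constraints],
            "The answer is accepted only if:",
            *[f"- {c}" for c in acceptance_criteria],
        ])

    prompt = build_prompt(
        task="Write a FastAPI endpoint that returns a user's invoices",
        context="Python 3.12, SQLAlchemy 2.x, existing get_session() dependency",
        constraints=[
            "No new dependencies",
            "Pagination with limit/offset, max 100 items",
            "Return 404 if the user does not exist",
        ],
        acceptance_criteria=[
            "Type hints on every function",
            "One happy-path test and one 404 test included",
        ],
    )
    print(prompt)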

3. Keep Your Critical Mind

Don’t fall into the blind trust trap. Verify, test, validate generated code systematically.
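
Systematic verification can start very small: before any generated function gets merged, it has to pass cases you wrote yourself. A minimal sketch, using a sort function as a stand-in for generated code:

    # Sketch of "trust but verify": before a generated function goes anywhere near
    # the codebase, it has to pass the cases you wrote yourself.
    KNOWN_CASES = [
        ([3, 1, 2], [1, 2, 3]),
        ([], []),
        ([5, 5, 5], [5, 5, 5]),
        ([2, -1], [-1, 2]),
    ]

    def looks_acceptable(candidate_sort) -> bool:
        for given, expected in KNOWN_CASES:
            try:
                if candidate_sort(list(given)) != expected:
                    return False
            except Exception:
                return False
        return True

    # candidate_sort would be the AI-generated function pasted into your review branch
    print(looks_acceptable(sorted))  # True: the reference implementation passes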

These best practices prepare us for what’s coming. Because beyond current debates about AI efficiency, a direction is clearly emerging.

The Future: Collaboration, Not Replacement

The future of our profession is built on intelligent collaboration between human and machine.

This evolution involves neither replacing developers with AI, nor maintaining the status quo. It’s about intelligent collaboration where human creativity is amplified through automation⁸.

The profiles that will prosper are those who know how to orchestrate solutions, navigate between traditional code, low-code tools, and generative AI according to project needs.

At the end of this analysis of AI’s promises and realities in development, a balanced approach emerges.

Conclusion: Let’s Stay Lucid

AI is transforming our profession, that’s undeniable. But 2025 reminds us that between marketing promises and technical reality, there’s still a gap.

The secret? Adopt these tools while keeping our technical expertise intact. AI is a formidable accelerator, not a replacement. And above all, let’s never forget that behind every line of code, there’s human responsibility.


Sources

  1. Can AI replace professional developers? Google’s Gemini CLI and Replit made errors - Developpez.com

  2. Google announces Gemini CLI: your open-source AI agent - Google Blog

  3. Gemini CLI: integrate Google’s AI agent for free in your terminal - HFrance

  4. The invisible productivity tax of “almost correct” AI code - Developpez.com

  5. AI coding tools slow down developers, according to research - ITDaily

  6. “Harmful and dangerous”: Sam Altman warns against ChatGPT dependence - Developpez.com

  7. AI capabilities are overestimated, call center AI assistants create more problems - Developpez.com

  8. The Low Code, No Code, AI revolution - OpenClassrooms

  9. Micode’s LinkedIn post on AI and developers’ future - LinkedIn

  10. Falkland Law: adopting patience in decision-making - HulkApps

Tags: AI Development