If You’re Going to Vibe Code, Why Not Do It in C?

STEPHENWARMSAY.NET

As someone who hasn’t touched C (or C++) in over 20 years, I can honestly say I haven’t asked myself this question!

Stephen, like many of us, is wrestling with the impact AI may have on the joy and satisfaction we experience as developers. A great many of us enjoy the craft of writing software and consider it both a career and a hobby. How will AI change this dynamic? Quite simply, does vibe coding take all the fun out of it?

However, the main thrust of this blog post is about programming languages themselves. Although they are designed for machines to parse, they are also designed for humans to understand; some languages even treat human readability as their primary feature (e.g. Ruby).

If you vibe code (in the strictest sense, where you ignore the code completely), does it really matter what language the vibe coding tool emits?

Has the cost of building software just dropped 90%?

MARTINALDERSON.COM

Another blog post from a programming old-timer! (we’re a vocal bunch at the moment)

In this post Martin takes a step back and looks at the various innovations (cloud, open source) that have had an impact on the overall cost of developing software. He argues that we have unfortunately lost some of these benefits by creating over-complicated solutions. I couldn’t agree more.

The assertion that the cost has dropped by 90% is very hand-wavy and in my opinion rather optimistic. I’d put it a different way.

The cost of creating code has dropped significantly (to near zero); however, the cost of ‘shipping’ (to use Martin’s terminology) hasn’t dropped nearly as much yet. This is because the speed at which code can be written (or generated) is rarely the main limiting factor in a software project.

The “Confident Idiot” Problem: Why AI Needs Hard Rules, Not Vibe Checks

SUBSTACK.COM

The inherent over-confidence in LLMs is something I also wrote about earlier this year. This post looks at the problem from the perspective of developing AI agents.

Testing non-deterministic systems such as AI agents is hard. So how do we solve this problem? We ask an LLM to validate the agent’s response (LLM as a judge).
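
To make the pattern concrete, here is a minimal sketch of LLM-as-a-judge in Python. The call_llm helper, the rubric and the pass threshold are my own illustrative assumptions, not anything taken from the post:

# Minimal LLM-as-a-judge sketch. call_llm is an assumed helper that sends
# a prompt to some LLM API and returns its text response.

JUDGE_PROMPT = """You are grading an AI agent's answer.
Question: {question}
Agent answer: {answer}
Score the answer from 0 to 10 for factual accuracy and relevance.
Reply with the score only."""

def call_llm(prompt: str) -> str:
    # Placeholder for a real API call (OpenAI, Anthropic, a local model, ...).
    raise NotImplementedError

def judge(question: str, answer: str, threshold: int = 7) -> bool:
    """Ask a second LLM to grade the agent's answer; pass if score >= threshold."""
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    try:
        score = int(reply.strip())
    except ValueError:
        # The judge itself misbehaved: yet another source of non-determinism.
        return False
    return score >= threshold

Note that the judge is just another probabilistic component, which is exactly the concern the quote below raises.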

“We are trying to fix probability with more probability. That is a losing game.”

I very much agree; this layering of non-determinism atop non-determinism doesn’t fill me with confidence, especially as this technology has a habit of failing in surprising ways! When adversarial poetry is a viable attack vector, we need a more robust approach.

Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

THEHACKERNEWS.COM

My initial response to this article is “only 30?”.

In all seriousness, any AI coding tool that has tool access (MCP), or the ability to write and run scripts, is inherently risky. The only robust security model is to place a “human in the loop”: grant “least privilege” access to resources and have a person validate each step and output.
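
As a rough sketch of what that gate might look like (purely illustrative; the tool names, allow-list and approve function are hypothetical, not a real agent framework):

# Human-in-the-loop gate: every tool call the agent proposes must be
# approved by a person before it runs, and only allow-listed tools exist.

ALLOWED_TOOLS = {"read_file", "list_dir"}  # least privilege: no write/exec by default

def approve(tool: str, args: dict) -> bool:
    """Show the proposed action to a human and ask for confirmation."""
    answer = input(f"Agent wants to run {tool} with {args}. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_tool_call(tool: str, args: dict, registry: dict):
    """Execute a tool only if it is allow-listed and a human approves it."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not on the allow-list")
    if not approve(tool, args):
        raise PermissionError("Human reviewer rejected the call")
    return registry[tool](**args)

Every one of those approval prompts is precisely the point where the human becomes the bottleneck.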

But let’s face it, this significantly reduces the velocity as our slow human brain becomes a blocker for the AI. It’s an uncomfortable dynamic.

Claude CLI deleted my entire home directory! Wiped my whole mac

REDDIT.COM

Proof point incoming …

GPT 5.2 released

OPENAI.COM

Just four weeks after the release of Codex Max, OpenAI have released another model, claiming the top spot across various benchmarks again.

As I noted previously, SWE-Bench, which has been the standard benchmark for evaluating models’ coding ability, has become saturated: models now achieve >80% scores. As a result, a newer, harder benchmark has been created: SWE-Bench Pro. Let’s see how long this one holds out!