Eight years of wanting, three months of building with AI
LALITM.COM
This is a really good blog post about Lalit’s experience of building and open sourcing syntaqlite: a parser, formatter, validator, and language server for SQLite SQL.
Despite the popularity of SQLite, Lalit felt the developer experience was poor, lacking formatters, linters and editor extensions for its own flavour of SQL. He has wanted to address this for a long time, but the challenge is hard and tedious. SQLite has no formal specification and no stable API for its own parser, so tackling this problem involves quite a lot of code archaeology.
Ultimately he followed a two-phase approach. The first was to fully vibe-code a solution (in C and Python). This created something functional in a short space of time, but close inspection of the underlying code revealed that it was a mess. It did, however, produce 500 test cases, which proved very valuable.
The second phase was AI-assisted: a rewrite in Rust following a strict methodology. Lalit retained ownership of the architecture and design decisions, giving the AI freedom within clear constraints, backed by thorough review and comprehensive validation and test suites.
A really good example of how to strike the right balance between moving at pace with AI and maintaining the human oversight needed to ensure a quality outcome for the long run.
Claude Code is unusable for complex engineering tasks with Feb updates
GITHUB.COM
In this detailed issue, raised on the Claude Code GitHub repository, a user describes their feeling that Claude has degraded over the past months to the point of being unusable, based on a detailed analysis of their Claude usage logs. They also draw attention to a change in a Claude release in February, after which “thinking” content is no longer surfaced.
Comments from other users on the issue were mixed, but the prevailing sentiment does seem to support the regression claims, adding anecdotal evidence that the model “feels dumber”. Someone from Anthropic did chime in, noting that the February change merely hides thinking rather than suppressing it. In the meantime, Anthropic has also introduced adaptive thinking and changed the default thinking effort.
This issue highlights a very tricky problem for tool vendors and model labs, and for end users alike.
When a model “thinks”, via inference-time compute, it is both expensive and slow. Model labs and tool vendors will find a ‘sweet spot’ in their default settings that they believe works well for most use cases; however, this is always going to be a compromise. It is also why concepts like “adaptive thinking”, where the model decides how long to think, have emerged.
As an end user, you have to be comfortable with an AI system whose behaviours are hard to define and change without notice. Whatever you are building on top of these models or tools (whether processes or your own technology), keep it lightweight, as it will almost certainly have to change over time.
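One lightweight mitigation is to pin the settings you care about in your own configuration layer, rather than inheriting vendor defaults that may shift between releases. The sketch below is illustrative only: the parameter names follow the Anthropic Messages API’s extended-thinking options as currently documented, and the model name and token budgets are assumptions for the example, not recommendations.

```python
# Minimal sketch: keep model and thinking settings explicit in one place,
# so a change in vendor defaults cannot silently alter your system's
# behaviour. Values below are illustrative assumptions.
PINNED_REQUEST_DEFAULTS = {
    "model": "claude-sonnet-4-5",  # pin a version you have actually tested
    "max_tokens": 2048,
    "thinking": {
        "type": "enabled",         # opt in to extended thinking explicitly
        "budget_tokens": 1024,     # cap inference-time compute yourself
    },
}

def build_request(prompt: str) -> dict:
    """Merge a user prompt with the pinned defaults."""
    return {
        **PINNED_REQUEST_DEFAULTS,
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Summarise this diff")
```

Because the settings live in your code rather than in the tool’s defaults, any future change is a deliberate, reviewable diff instead of a surprise.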
The AI Great Leap Forward
GITHUB.IO
This is an interesting cautionary tale about the top-down push and race for AI adoption, and how it mirrors China’s Great Leap Forward. The analogy works surprisingly well, and makes this quite an engaging read. While the focus is on the broader adoption of AI, and the push for everything to be agentic, it details a number of points that are relevant to those of us who are primarily concerned with using AI to build software.

The story describes the loss of engineering roles and functions, cutting QA, documentation and Ops. These are cast as “sparrows”, eliminated because “AI can do it”. It also talks of a breakdown of engineering safeguards, with AI-generated tests that “validate its own assumptions”, and the erosion of tacit knowledge, where removing experienced engineers and managers loses system context.
It paints a pretty bleak picture, which will not be universally true. However, I am sure that some organisations will make many of the mistakes outlined here in a singular pursuit of progress and velocity.
Is AI taking the fun out of software development?
SCOTTLOGIC.COM
And finally, please forgive a bit of self-promotion! I recently recorded a podcast with a couple of my colleagues talking about the more human impact of AI on software development. What does this mean for our jobs? Our skills? These are certainly going to change in many ways, but a growing concern I hear from many experienced developers is that the software engineering job of the future, which takes them further away from the code, may not be something that really excites them.