Building Apps in Real Time: My Experience with Claude Imagine

MEDIUM.COM

In last week’s newsletter I covered the Sonnet 4.5 release, Anthropic’s flagship model that hopes to regain its crown from GPT-5. There was something interesting lurking at the bottom of their release blog post: a bonus research preview of “Imagine with Claude”.

This limited-time demo, for Pro and Max subscribers, is a prompt-to-prototype tool. We already have these of course, with Bolt and Lovable being notable leaders. However, where other prototyping tools still have an embedded editor, Imagine does away with the IDE entirely. Anthropic is relying on the speed and power of its model to create working software right before your eyes.


As Pawel notes in this post:

“The line between designer, developer, and user starts to blur. You’re simultaneously describing what you want, seeing it built, testing it, and refining it — all in the same conversation.”

Embracing the parallel coding agent lifestyle

SIMONWILLISON.NET

One of the key promises of AI agents is that they can code autonomously while you occupy yourself with other tasks (either coding yourself, or perhaps kicking back with a cup of coffee!). Simon has been somewhat skeptical of this approach due to the need to carefully review the AI-generated code that an agent produces. Despite initial misgivings, he has found himself “quietly starting to embrace the parallel coding agent lifestyle”.

In this post Simon shares the tools he is using and the types of tasks he is happy to delegate to an agent.

Vibe Engineering

SIMONWILLISON.NET

And another post from the rather prolific Simon Willison, and one that particularly resonates with me.

We have a cool new phrase that describes the process of relying on AI exclusively to write, build, and fix our code: vibe coding. However, we lack a term for the process where we lean heavily on AI but still care deeply about the code it generates.

In this post Simon suggests the term “vibe engineering”, but admits that he doesn’t really like it, and nor do I! However, he uses the post to enumerate all the practices that he feels are important to a “vibe engineer”, noting that these are all existing practices, but when engineering with AI we rely on them even more.

This post triggered some of the most thoughtful discussions I’ve seen on Hacker News on the topic.

Two things LLM coding agents are still bad at

KIX.DEV

I’m going to give you the TL;DR for this post; the two things are:

  1. LLMs lack the ability to cut and paste
  2. LLMs are terrible at asking questions

The first point relates to the way LLMs undertake edits: rather than physically moving code around, they re-emit it from “memory”. I’m not that concerned about this issue, and as the author notes, more recent coding agents are starting to build this capability.
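To make the distinction concrete, many agents now expose an explicit edit tool, so the model outputs a small search-and-replace patch and the harness moves the untouched code verbatim, rather than the model regenerating the whole file. Here is a minimal sketch of that idea; the function name and behaviour are my own illustration, not any particular agent’s implementation:

```python
def apply_edit(source: str, old: str, new: str) -> str:
    """Apply a search-and-replace edit emitted by a model.

    The model only produces the `old`/`new` pair; everything else in
    `source` is carried over untouched, so unchanged code cannot be
    corrupted by re-generation.
    """
    count = source.count(old)
    if count != 1:
        # Refuse ambiguous or missing matches rather than guessing.
        raise ValueError(f"expected exactly one match, found {count}")
    return source.replace(old, new)


code = "def greet(name):\n    print('Hello, ' + name)\n"
patched = apply_edit(
    code,
    "print('Hello, ' + name)",
    "print(f'Hello, {name}')",
)
```

The key design choice is the uniqueness check: if the search string matches zero or multiple locations, the tool fails loudly instead of letting the model silently edit the wrong spot.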

The second point is more interesting: when you instruct an LLM to undertake a task, it will do so without asking any further clarifying questions. It will do its best to fill in the blanks, which can result in some very poor outcomes. The standard technique for mitigating this is to spend lots of time prompt engineering, or creating AGENTS.md files, but it is impossible to know how much detail is needed.
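For readers who haven’t met the convention: an AGENTS.md file is just a markdown document, placed in the repository root, that coding agents read for project-specific instructions. A minimal sketch might look like the following; the commands and conventions here are invented for illustration, not taken from a real project:

```markdown
# AGENTS.md

## Build and test
- Run the test suite before declaring a task complete.

## Conventions
- Prefer small, focused functions; match the existing code style.

## When unsure
- If the requirements are ambiguous, stop and ask a clarifying
  question rather than guessing.
```

Note that the last section is exactly the author’s point: you can *tell* the agent to ask questions, but you are still guessing up front which ambiguities it will hit.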

I do think this is a pretty fundamental issue, and it relates to one I noted a few months ago: LLMs don’t know what they don’t know.