Turing-Checkable Generations

I think current agentic programming leaves out a useful pattern that “good old-fashioned prompt chaining” had.

Certain generations can be checked for correctness by code. A trivial example is JSON generation.

When you ask a model to generate JSON:

  • sometimes it produces clean JSON
  • sometimes it wraps the JSON in code fences
  • sometimes it wraps the JSON in an enclosing object
  • sometimes it rambles before or after the JSON output

If we attach the generation to a JSON validator, we can respond to parse errors by prompting the LLM with something like:

“Great, but your output doesn’t parse. Rewrite your output so it will parse.”
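
That check is cheap to write. Here is a minimal sketch of a checker that returns `None` on success and a corrective follow-up prompt on a parse failure, using only the standard library (the function name and prompt wording are my own placeholders):

```python
import json

def check_json(output: str) -> str | None:
    """Return None if the output parses as JSON, else a corrective follow-up prompt."""
    try:
        json.loads(output)
        return None  # the generation passed the check
    except json.JSONDecodeError as e:
        # Hand the parse error back to the model and ask for a rewrite
        return (
            f"Great, but your output doesn't parse as JSON ({e}). "
            "Rewrite your output so it will parse. Return only the JSON, "
            "with no code fences or commentary."
        )
```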

The resulting generation process is then:

  1. Generate
  2. Apply a code-based check
  3. Revise and repeat as necessary, in a loop
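
In code, that loop might look like the sketch below. `call_llm` is a hypothetical stand-in for whatever model API you use, and `check` is any function with the shape of `check_json` above:

```python
from typing import Callable

def generate_checked(
    prompt: str,
    call_llm: Callable[[str], str],      # hypothetical model wrapper
    check: Callable[[str], str | None],  # e.g. check_json above
    max_attempts: int = 3,
) -> str:
    """Generate, check with code, and revise in a loop until the check passes."""
    for _ in range(max_attempts):
        output = call_llm(prompt)        # 1. generate
        followup = check(output)         # 2. apply the code-based check
        if followup is None:
            return output
        # 3. fold the failing output and the correction into the next attempt
        prompt = f"{prompt}\n\nYour previous output:\n{output}\n\n{followup}"
    raise ValueError(f"Output failed the check after {max_attempts} attempts.")
```

Each retry carries the failing output along with the correction, so the model can repair what it wrote rather than start over from scratch.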

It’s not a universally applicable pattern, but it’s one I use all over my own side hacking, and one I struggle to integrate cleanly into agents without resorting to more rigid workflows.
