justing.net

Prompts for the bots

By Justin G. on

Working with the bots

So, you’re working with LLM chatbots more and more? Me too. Here’s what I know, circa Q1 ‘26: these are the patterns I have found most effective. Remember to think critically about what you read in your chats, just as you would with any other correspondent or resource.

The mental model is a hyperactive intern who lacks the full context you have and doesn’t let that stop them. Interns, being very junior employees, also don’t know when they should object and point out something obvious you missed; you have to ask for that explicitly. So the approach is to expose mismatched assumptions by having the bot create an artifact at each step that you review and correct. This makes the communication explicit and bidirectional, and exposes where what you said was interpreted to mean something you didn’t intend. To be clear, the senior/junior employee analogy cuts both ways: some seniors are better at directing juniors than others, and this artifact-based communication tries to address the limitations of both the human and the bot.

The trouble is that you can’t deploy APE or IMe for every query or interaction; some don’t require them at all. Both the bot and you need judgement about when to apply these techniques to get the most out of them. I use my intuition about the cost of getting it wrong to decide how much scaffolding to apply. And by cost, I really mean time: you can’t get that back. I find that I use IMe a lot because my questions are often subtle (i.e. the context is difficult to share) and require nuance to get precisely what I want. Larger projects, appropriate for APE, are also easy to recognize. In the “do neither” category are things you just prompt for again, but differently.

These posts resonate:

Using Agents

Originally I had these sections in the reverse order, but the techniques build on each other. When I use the APE pattern, I am also hierarchically applying many or all of the other techniques wherever I recognize that they will increase the chances of a better output.

Basic Habits

Be as explicit as you can about what you are trying to do, achieve, or solve. This usually means it’s worth spending a minute to think and then write more words about it, after your initial prompt is written but before you hit send. This does slow you down a bit, so first capture your thought quickly by blurting it into the little text box. Pause, though, friend. It’s worth thinking a bit more.

Slow is smooth. Smooth is fast.

Good prompts should include examples of what you want (e.g. the CSV columns exactly as you want them), or of what you specifically do not want (e.g. “I don’t want a list of pithy non-sequiturs; make it concrete”).

Ask it to list alternatives, or ask “what’s wrong with this approach?” Remember that these agents have a massive, diverse reservoir of knowledge; they know things that you don’t know you don’t know. Our goal is to leverage that. You are trying to find what you don’t know, so make sure there is an opportunity for it to enter. There are other versions of this, like “steel-man the counterargument” and “what are the obvious criticisms of my thinking here?”
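The habits above can be sketched as a small prompt-assembly helper: an explicit goal, a concrete example of what you want, a concrete example of what you don’t want, and an invitation for the bot to push back. The function name and structure are illustrative, not from any library.

```python
def build_prompt(goal: str, want: str, avoid: str) -> str:
    """Assemble an explicit prompt with positive and negative examples,
    plus a standing request for criticism and alternatives."""
    return "\n".join([
        f"Goal: {goal}",
        f"Example of what I want: {want}",
        f"Example of what I do NOT want: {avoid}",
        "Before answering, tell me what's wrong with this approach",
        "and list any alternatives I may not know about.",
    ])

# A hypothetical usage, echoing the CSV example above:
prompt = build_prompt(
    goal="Export the sales data as CSV",
    want="columns exactly: date,region,units_sold,revenue_usd",
    avoid="a list of pithy non-sequiturs",
)
```

The point is less the code than the checklist it enforces: if one of the arguments is hard to fill in, that’s usually the part of your thinking that was still vague.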

In longer threads, restate the goal. It can get lost on both of you. This is a technique that I underutilize.

The IMe pattern: Interview Me

The Interview Me technique is how you get the agent to “help me help you”: it draws you out, making you more explicit about what you are asking for. Get the bot to ask you more questions about your request, so both of you are clearer on the objective.

Interview me about this to get a fuller context
of what I mean and need from you.
Ask me 3+ questions to clarify the context and
understand what I want here.
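The snippets above can be mechanized as a tiny wrapper that prefixes any question with the interview directive before it goes to the model. `interview_me` is an illustrative sketch; whatever chat API you actually use would receive the resulting string.

```python
# The interview preamble, taken from the prompt snippets above.
IME_PREAMBLE = (
    "Interview me about this to get a fuller context of what I mean "
    "and need from you. Ask me 3+ questions to clarify the context "
    "and understand what I want here.\n\n"
)

def interview_me(question: str) -> str:
    """Prefix a question with the Interview Me directive."""
    return IME_PREAMBLE + question

# Hypothetical usage: the wrapped message is what you'd send to the bot.
msg = interview_me("Help me restructure this billing module.")
```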

The APE pattern: analyze, plan, execute

The principles here are useful for any larger project: developing a new code module, modifying an existing code base, or researching and writing an extensive report.

The underlying principle is that constraining the output to a fixed artifact (a table of tradeoffs, a list of assumptions, or a proposed code diff) forces us to see the consequences of the earlier prompt(s). This is the same technique as “writing is thinking”: when we make our intent real, external, and inspectable, we can see what we really meant and its implications. Then we can make it better.

  1. Analyze, aka research only: capture learnings in research.md. Use phrases like “understand deeply”. You, the human, now read that analysis and make corrections. Tell the bot to revise the analysis based on your corrections. Repeat until it matches your understanding.
  2. Plan only: capture the plan in plan.md with examples, checklists, and explicit directives. Review and correct the plan, and have the bot revise it. Repeat until you are convinced the plan is right.
  3. Execute, aka build, in phases: tell the bot not to stop until the whole plan (or just the current phase) is done. Tell it to use the red/green pattern. Tell it “Do not deviate from the plan.” Mark items complete in the plan.md file as you go.
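The three phases above can be sketched as a loop where each phase produces a named artifact for human review before the next begins. `run_bot` and `human_review` are placeholders, not a real agent API; the artifact-per-phase structure is the point.

```python
def run_bot(instruction: str) -> str:
    # Stand-in for a real agent call; returns a tagged string so the
    # flow of instructions to artifacts is visible.
    return f"[bot output for: {instruction}]"

def human_review(artifact: str) -> str:
    # Stand-in for the review/correct/revise loop described above;
    # in practice you'd iterate here until the artifact is right.
    return artifact

def ape() -> dict:
    """Run the Analyze / Plan / Execute phases, each gated on review."""
    artifacts = {}
    # 1. Analyze: research only, captured for correction.
    artifacts["research.md"] = human_review(
        run_bot("Research only. Understand deeply. Capture in research.md."))
    # 2. Plan: checklists and explicit directives, captured for correction.
    artifacts["plan.md"] = human_review(
        run_bot("Plan only, with examples and checklists. Capture in plan.md."))
    # 3. Execute: red/green, no deviation from the reviewed plan.
    artifacts["execution.log"] = run_bot(
        "Execute the plan. Use red/green. Do not deviate from the plan.")
    return artifacts
```

Notice that the human gates the first two phases but not the third; by the time you execute, the plan itself is the contract.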

To Reply: Email me your best llm tips.

Posted: in Notes.

Other categories: none.

Back references: none.

Tags that connect: [[ai]]: New to me Facts and Ideas in June 2024, Borges and AI.

Tags only on this post: llms.