Prompt Leakages: What You Repeat Will Be Repeated, Even Outside of Patterns
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
If you’ve built any complex LLM system, there’s a good chance that the model has consistently done something you don’t want it to do. You might have been using GPT-4 or some other powerful but inflexible model, and so maybe you “solved” (or at least mitigated) this problem by writing a long list of what the model must and must not do. Maybe that had an effect, but depending on how tricky the problem is, it may even have made things worse. What gives?
There was a time, a long time ago (read: last week; things move fast), when I believed that the power of the pattern was absolute, and that LLMs were such powerful pattern completers that, when predicting something, they would only “look” at the areas of their prompt that corresponded to the part of the pattern they were completing. So if the handwritten prompt they were given was something like this (repeated characters represent similar information):
Information:
AAAAAAAAAAA 1
BB 1
CCCC 1
Response:
DD 1
Information:
AAAAAAAAA 2
BBBBB 2
CCC 2
Response:
DD 2
Information:
AAAAAAAAAAAAAA 3
BBBB 3
CCCC 3
Response:
← if it was currently here and the task is to produce something like DD 3
I thought it would be paying most attention to the information A 3, B 3, and C 3, and especially to the previous parts of the pattern, DD 1 and DD 2. With two or three examples like the first one, the only “reasonable” pattern continuation would be to write something with only Ds in it.
But taking this abstract analogy further, I found the results were often more like:
AADB
This made no sense to me. Every example in the prompt included only D-type information in its response, so why were A and B leaking? Following my prompting principle that “consistent behavior has a specific cause”, I searched the example responses for any trace of A or B. But there was nothing there.
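To make the abstract layout above concrete, here is a minimal sketch of how such a few-shot prompt might be assembled (hypothetical code, not anything from Augmentoolkit). Notice that the A and B material appears several times in the prompt even though no example response contains it, which is exactly the kind of setup where it can leak:

```python
# Hypothetical sketch of the few-shot layout above; not Augmentoolkit's actual code.
# Each example has "information" sections (the As, Bs, and Cs) and a response (the Ds).
examples = [
    {"info": "AAAAAAAAAAA 1\nBB 1\nCCCC 1", "response": "DD 1"},
    {"info": "AAAAAAAAA 2\nBBBBB 2\nCCC 2", "response": "DD 2"},
]
new_info = "AAAAAAAAAAAAAA 3\nBBBB 3\nCCCC 3"

def build_prompt(examples, new_info):
    parts = []
    for ex in examples:
        parts.append(f"Information:\n{ex['info']}\nResponse:\n{ex['response']}")
    # The model continues from this final "Response:" line.
    parts.append(f"Information:\n{new_info}\nResponse:\n")
    return "\n".join(parts)

print(build_prompt(examples, new_info))
```

Even though every example response is pure D, the prompt as a whole is mostly A, B, and C.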
This problem persisted for months in Augmentoolkit. Originally it took the form of the questions almost always including something like “according to the text”: I’d get questions like “What is x… according to the text?” All this despite the fact that none of the example questions even had the word “text” in them. I kept getting As and Bs in my responses, even though the examples only had D in them.
Originally this problem had been covered up with an “if you can’t fix it, feature it” approach. Including the name of the actual text in the context made the references to “the text” explicit: “What is x… according to Simple Sabotage, by the Office of Strategic Services?” That question is answerable by itself and makes more sense. But when multiple important users asked for a version that didn’t reference the text, my use of the ‘Bolden Rule’ fell apart. I had to do something.
So at 3:30 AM, after a number of frustrating failed attempts at solving the problem, I tried something unorthodox. The “A” in my actual use case was coming from the chain of thought step, which referenced “the text” multiple times while analyzing it to brainstorm questions according to certain categories. It had to call the input something, after all. So I thought, “What if I just delete the chain of thought step?”
I tried it. I generated a small trial dataset. The result? No more “the text” in the questions. The actual questions were better and more varied, too. The next day, two separate people messaged me with cases of Augmentoolkit performing well — even better than it had on my test inputs. And I’m sure it wouldn’t have been close to that level of performance without the change.
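For illustration, the change amounted to something like the following (a sketch with invented field names, not the real Augmentoolkit prompt code): each few-shot example used to carry a chain-of-thought section that said “the text” over and over, and dropping that section from the examples removes the leaking phrase entirely.

```python
# Sketch only: invented structure and field names, not the real pipeline code.
def render_example(ex, include_chain_of_thought=True):
    parts = [f"Text to make questions from:\n{ex['source']}"]
    if include_chain_of_thought:
        # This section referred to "the text" many times while brainstorming questions,
        # and that phrasing leaked into the generated questions downstream.
        parts.append(f"Analysis:\n{ex['chain_of_thought']}")
    parts.append(f"Questions:\n{ex['questions']}")
    return "\n".join(parts)

# The fix was simply to stop including the chain-of-thought section in the examples:
# render_example(ex, include_chain_of_thought=False)
```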
There was a specific cause for this problem, but it had nothing to do with a faulty pattern: the model was consistently drawing on information from the wrong part of the prompt, using it in a way it shouldn’t have been. Even so, the fix was still under the prompter’s control, because removing the source of the erroneous information meant the model was no longer “tempted” to use it. By the same logic, telling a model not to do something probably makes it more likely to do that thing, if the model is not properly fine-tuned: you’re adding more instances of the problematic information, and the more of it there is in the prompt, the more likely it is to leak. When “the text” was leaking into basically every question, the words “the text” appeared roughly 50 times in that prompt’s examples (in the chain of thought sections of the input). Clearly that information was leaking into, and influencing, the generated questions, even though it never appeared in the actual example questions themselves.

This implies another prompting principle: models learn from the entire prompt, not just the part they’re currently completing. You can extend or modify this into two other forms: models are like people, in that you need to repeat things to them if you want them to do something; and if you repeat something in your prompt, regardless of where it is, the model is likely to draw on it. Together, these principles offer a plethora of new ways to fix a misbehaving prompt (by removing repeated extraneous information) or to induce new behavior in an existing one (by adding it in multiple places).
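A practical way to apply “consistent behavior has a specific cause” here is to count how often the offending phrase appears in each part of the prompt. A rough sketch (a hypothetical helper, not something that ships with Augmentoolkit):

```python
import re

def count_phrase_by_section(prompt_sections, phrase):
    """Count case-insensitive occurrences of `phrase` in each named prompt section."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    return {name: len(pattern.findall(text)) for name, text in prompt_sections.items()}

# Illustrative contents only:
sections = {
    "instructions": "Write questions based on the provided information.",
    "example_chain_of_thought": "First I will read the text. The text mentions...",
    "example_questions": "What is x, according to Simple Sabotage, by the Office of Strategic Services?",
}
print(count_phrase_by_section(sections, "the text"))
# A section with a high count that you never intended to be echoed is a likely leak source.
```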
There’s clearly more to model behavior than examples alone: though repetition offers less fine-grained control, it’s also much easier to write. For a recent client project I was able to handle an entirely new requirement, even after my examples had been written, by repeating the instruction at the beginning of the prompt, in the middle, and right at the end, near the user’s query. Between examples and repetition, the open-source prompter should have all the systematic tools they need to craft beautiful LLM instructions. And since these models, unlike OpenAI’s GPT models, are not overtrained, the prompter has more control over how they behave: the “specific cause” of the “consistent behavior” is almost always within your context window, not some proprietary training dataset.
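As a sketch of that repetition trick (hypothetical prompt content, not the client project’s actual code), the same instruction goes near the top, between the examples, and right before the user’s query:

```python
# Sketch: repeat a key instruction at the beginning, middle, and end of the prompt.
# The instruction text and structure here are illustrative only.
def build_prompt(instruction, examples, user_query):
    parts = [instruction]                      # beginning
    midpoint = len(examples) // 2
    for i, example in enumerate(examples):
        parts.append(example)
        if i == midpoint:
            parts.append(instruction)          # middle, wedged between examples
    parts.append(instruction)                  # end, right before the user's query
    parts.append(user_query)
    return "\n\n".join(parts)

prompt = build_prompt(
    "Never refer to the source document as 'the text' in your answer.",
    ["<example 1>", "<example 2>", "<example 3>", "<example 4>"],
    "<user query goes here>",
)
```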
Hopefully these prompting principles expand your prompt engineering toolkit! They were learned entirely from my experience building AI tools: they aren’t what you’ll find in a research paper, and as a result you probably won’t see them in many other AI blogs. Still, discovering this sort of thing and applying it is fun, and sharing it is enjoyable. Augmentoolkit received some updates recently while I was implementing this change and others (it now has a Python script, a config file, API usage enabled, and more), so if you’ve used it before but found it difficult to get started with, now’s a great time to jump back in. And of course, applying the principle that repetition influences behavior, don’t forget that I have a consulting practice specializing in Augmentoolkit and improving open model outputs :)
That’s all for this week. Have a good one and I’ll see you next time!