The Four-Part Prompt: How I Stopped Overcomplicating AI and Started Getting Results


Prompt Engineering Doesn’t Need to Be Rocket Science: A Simple Framework That Works

Have you ever watched someone spend hours crafting the “perfect” AI prompt, treating each word like it might trigger a butterfly effect that determines whether their AI assistant becomes Einstein or a confused toddler? That was me just a few months ago while building the Locaboo Assistant.

Here’s the thing: I was making it way harder than it needed to be.

As the head of product at Locaboo, where we help municipalities digitize their administration processes, I’ve been leading the development of our first in-product AI assistant – a tool designed to help users navigate our platform and complete tasks efficiently. What started as an exciting project quickly turned into a rabbit hole of “prompt engineering” techniques, each promising to be the secret sauce for AI excellence.

Can you believe that there are entire courses dedicated to the “art” of asking an AI to do something? It’s as if we’ve collectively decided that communicating with AI requires an entirely new language.

But after weeks of testing and refinement, I discovered something surprising: The most effective prompt structure isn’t the most complex one. In fact, the framework that’s working remarkably well for our Locaboo Assistant is refreshingly straightforward — just four simple parts that provide clear direction without the fluff.

In this post, I’ll share this framework and why I think many people are overcomplicating prompt engineering. Whether you’re building a sophisticated AI agent or just trying to get better results from ChatGPT (or Claude! You should really try Claude if you haven’t yet!), this might save you some late nights and unnecessary headaches.

The Overcomplicated “World of Prompt Engineering”

When did talking to AI become so complicated? If you’ve been involved in AI development at all, you’ve probably encountered the term “prompt engineering” more times than you can count. It’s become the hot new skill everyone’s scrambling to master, complete with certifications, dubious job titles, and endless LinkedIn posts about “unlocking AI’s potential through better prompting”.

And look, I get it. The way you phrase a request to an AI system genuinely matters. But somewhere along the line, we crossed from reasonable consideration into something that feels like mysticism.

I’ve seen prompt templates that include elaborate backstories for the AI (“Imagine you are a world-renowned expert in municipal administration with 40 years of experience and three PhDs…”), complex formatting with special characters and intricate hierarchies, and instructions so detailed they could double as legal contracts. Some practitioners treat prompts like magic spells where changing “please provide” to “kindly offer” might somehow produce dramatically different results.

What’s driving this complexity? Part of it is the natural human tendency to overcomplicate things we don’t fully understand. AI systems can sometimes seem unpredictable, so we try to control every variable. It’s like when people develop elaborate superstitions around activities with uncertain outcomes — like the baseball player who has to tap his bat exactly four times before every swing.

Another factor is the genuine challenge of aligning AI behavior with human expectations. When an AI does something unexpected, the easiest fix seems to be adding another rule to the prompt: „And also, don’t ever suggest X“ or „Always remember to include Y.“ Over time, these additions accumulate into unwieldy instruction sets.

During the early development of the Locaboo Assistant, I fell into this trap myself. Our prompt grew to nearly 2,000 words, with nested bullet points three levels deep and specific instructions for dozens of edge cases. Maintaining it became cumbersome – every time we updated a feature, we had to carefully review and modify this prompt.

The breaking point came when we realized that despite all this complexity, we were still seeing the same kinds of errors repeatedly. Our elaborate prompt wasn’t actually solving the fundamental issues. That’s when I decided to step back and rethink our approach from first principles.

What if, instead of trying to anticipate and instruct for every possible scenario, we focused on giving the AI a clear framework for decision-making? What if we trusted the underlying model’s capabilities more and focused our instructions on the areas where guidance was truly needed?

This shift in thinking led to the development of our current four-part framework — a structure so simple that at first, I worried it might be too basic. But as you’ll see, sometimes less really is more.

A Simple Four-Part Framework That Actually Works

You know what’s funny about simplicity? We often arrive at it only after exhausting all the complex alternatives. That’s exactly what happened with our approach to prompting the Locaboo Assistant.

After scrapping our unwieldy 2,000-word prompt manifesto, I started with a blank page and asked myself: “What does our AI assistant actually need to know to be effective?” Not what might be nice to include, or what could theoretically help in some edge case, but the essential information required for consistent performance.

The answer turned out to be surprisingly straightforward. Our agent needed four types of information, no more and no less:

  1. Context about its role and the environment it operates in
  2. Knowledge of its available tools and when to use them
  3. Awareness of common pitfalls to avoid
  4. General guidelines for interaction and boundaries

That’s it. Four simple sections that together create a clear mental model for the AI to work with. No elaborate role-playing scenarios, no complex formatting tricks, no lengthy philosophical discussions about the nature of assistance.

The beauty of this framework lies in its clarity and maintainability. Each section serves a specific purpose, making it easy to update individual components without disrupting the whole. When we add a new tool or identify a new common mistake, we know exactly where that information belongs.
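To make this concrete, here is a minimal sketch of how such a prompt could be assembled in code. The variable names and section contents below are illustrative placeholders, not our production prompt:

# Illustrative sketch: each of the four sections is a separate, independently
# maintainable string, and the system prompt is simply their concatenation.

CONTEXT_SECTION = (
    "You are the Locaboo Assistant, designed to help municipal staff "
    "use the Locaboo platform for administrative tasks."
)

TOOLS_SECTION = (
    "Available tools and when to use them:\n"
    "- GetResourceData: use when a user asks about a specific resource ...\n"
    "- GetAvailability: use before creating or changing a booking ..."
)

MISTAKES_SECTION = (
    "Common mistakes to avoid:\n"
    "- DON'T create a booking without checking availability. "
    "INSTEAD, use the GetAvailability tool first."
)

GUIDELINES_SECTION = (
    "General guidelines:\n"
    "- Address users formally.\n"
    "- When uncertain, state your limitations rather than guessing."
)

def build_system_prompt() -> str:
    """Join the four sections; updating one section never disturbs the others."""
    return "\n\n".join(
        [CONTEXT_SECTION, TOOLS_SECTION, MISTAKES_SECTION, GUIDELINES_SECTION]
    )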

It’s like the difference between giving someone driving directions as “turn left at the third light, then right where the old bakery used to be, then bear left at the fork unless it’s rush hour, in which case…” versus simply providing a clear map with the destination marked. The first approach breaks down as soon as something changes; the second empowers the navigator to find their way even when encountering unexpected situations.

In the following sections, I’ll break down each part of this framework and explain why it works so well. But before diving into the details, I want to emphasize that this isn’t about having a “minimal” prompt — it’s about having the right information structured in a way that the AI can effectively use it. Sometimes that means being concise, and sometimes it means being thorough where thoroughness matters.

Let’s explore how each component works in practice.

Part 1: General Context and Role Definition

The first part of our framework is all about setting the stage. Think of it as orienting the AI to its environment and purpose, like explaining to a new employee what company they’ve joined and what their job entails.

This section answers fundamental questions: What is the Locaboo platform? Who uses it? What is the assistant’s role within this ecosystem? What kinds of problems should it be prepared to help with?

Here’s what makes this context section effective:

It’s factual, not fictional. Unlike prompts that ask the AI to “imagine” it’s something it’s not, we simply state the reality: “You are the Locaboo Assistant, designed to help municipal staff use the Locaboo platform for administrative tasks.” No pretending to be a world expert or adopting a fictional persona—just clarity about its actual purpose.

It prioritizes relevance over comprehensiveness. We don’t attempt to explain every feature of Locaboo or every nuance of municipal administration. Instead, we focus on the information most relevant to the assistant’s function. For example, we explain that “Locaboo helps municipalities manage resource bookings, citizen requests, and administrative workflows” because these are the areas where users will need assistance.

It establishes scope boundaries. By clearly stating what the assistant is designed to help with, we implicitly define what falls outside its scope. This creates a foundation for appropriate responses when users ask about topics beyond the assistant’s expertise or purpose.

In our case, a simple paragraph explains that Locaboo is a platform for municipal administration, followed by 3-4 sentences outlining the main use cases the assistant should support: helping users navigate the interface, explaining features, troubleshooting common issues, and guiding users through workflows like creating a new booking or processing a citizen request.
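To give a feel for the scale, the whole context section amounts to something like the following (paraphrased from the details above, not our exact wording):

You are the Locaboo Assistant, built into Locaboo, a platform that helps municipalities manage resource bookings, citizen requests, and administrative workflows. Your role is to help municipal staff use the platform: navigating the interface, explaining features, troubleshooting common issues, and guiding users through workflows such as creating a new booking or processing a citizen request.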

This context section doesn’t need to be exhaustive — in fact, being too detailed here can be counterproductive. The goal is to provide just enough information for the AI to understand its role and the general domain it’s operating in.

One mistake I made in earlier versions was trying to include detailed explanations and examples of municipal workflows in this section. This not only made the prompt unwieldy but also confused the assistant by overloading it with information that would be better handled through its tools or knowledge base when actually needed.

The key insight here is that this section should focus on the assistant’s identity and purpose, not on comprehensive domain knowledge. It’s about establishing a clear foundation that the rest of the prompt can build upon.

When we stripped this section down to the essentials, we found that the assistant became more focused and confident in its responses. It had a clearer understanding of its role, which translated into more relevant and helpful interactions with users, without us having to spell those behaviors out explicitly.

Part 2: Tool Access and Contextual Usage

The second part of our framework is where we equip our AI assistant with its toolkit and explain when and how to use each tool. This is where many prompting approaches fall short: they either list available tools without context, skip listing them entirely and rely solely on the tool definitions themselves, or bury tool usage guidelines within overly complex instructions.

In our approach, we split each tool’s description between the tool definition itself and the system prompt:

  • Tool itself: What the tool does (its basic functionality)
  • System prompt: When the assistant should consider using it (contextual guidance)

This distinction between what a tool does technically and when it should be used is subtle but powerful.

Let me illustrate with an example from the Locaboo Assistant. One of our tools is the GetResourceData tool, which allows the assistant to look up resources in the system. Here’s how we describe it in the tool description:

Receives resource data as a CSV file. The output of this tool is the file path and the CSV header. Some fields in the fetched data may be empty.

And here is the description in the system prompt:

- Use this when a user asks about a specific resource to find its ID and use this ID in subsequent tool calls
- Use this to help users find available resources matching certain criteria
- Use this to verify if a resource exists before attempting operations on it

Notice how the first part explains what the tool does functionally, while the second part provides context about when to use it. This guidance is directly tied to common user scenarios the assistant will encounter.

What makes this approach effective is that it bridges the gap between the technical capabilities of tools and the human needs they serve. An AI might understand that GetResourceData returns resource data, but without contextual guidance, it might not recognize when this capability should be deployed in a conversation.
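In code, this split looks roughly like the following. The schema format and the query parameter are assumptions for illustration (an OpenAI-style function definition); the point is simply that the functional description travels with the tool, while the guidance on when to call it lives in the system prompt.

# Illustrative sketch, not our actual implementation.
# The tool definition carries the functional description ...
GET_RESOURCE_DATA_TOOL = {
    "type": "function",
    "function": {
        "name": "GetResourceData",
        "description": (
            "Receives resource data as a CSV file. The output of this tool is "
            "the file path and the CSV header. Some fields in the fetched data "
            "may be empty."
        ),
        "parameters": {  # hypothetical parameter, for illustration only
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search criteria for resources.",
                },
            },
            "required": ["query"],
        },
    },
}

# ... while the system prompt carries the contextual guidance on when to use it.
GET_RESOURCE_DATA_GUIDANCE = (
    "GetResourceData:\n"
    "- Use this when a user asks about a specific resource to find its ID for "
    "subsequent tool calls.\n"
    "- Use this to help users find available resources matching certain criteria.\n"
    "- Use this to verify that a resource exists before attempting operations on it."
)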

We discovered the importance of this approach through trial and error. In early versions, we simply listed the tools with technical descriptions, assuming the AI would figure out when to use them. The result? The assistant would often choose inappropriate tools or fail to use tools when they would have been helpful.

For example, when a user asked, „Can I book the main conference room next Tuesday?“ the assistant would sometimes try to create a booking immediately without first checking if the room was available – a perfect use case for the GetResourceData tool followed by GetAvailability.

By adding contextual usage guidance, we saw a dramatic improvement in the assistant’s ability to select the right tools at the right time. It started to develop what felt like common sense about tool selection, even across different scenarios.

Another benefit of this approach is maintainability. When we add a new tool or modify an existing one, we don’t need to update complex scenarios throughout the prompt. We simply update the tool description and its usage guidance in this dedicated section.

The key insight here is that tools should be described not just in terms of what they do, but in terms of the user needs they address. This creates a mental model that helps the AI connect user requests to appropriate actions, much like how an experienced customer service representative develops intuition about which resources to use for different types of customer issues.

Part 3: The Power of “Don’t Do This” Lists

The third part of our framework might be the most counterintuitive, yet it’s proven to be remarkably effective: a straightforward list of common mistakes to avoid.

When I first suggested adding this section, some team members were skeptical. “Shouldn’t we focus on telling the AI what to do rather than what not to do?” they asked. It’s a reasonable question — after all, positive instruction is generally considered more effective than negative instruction in human learning.

But here’s the thing about AI assistants: they’re not humans. They don’t get discouraged by being told what not to do, and they don’t have egos that need protecting. What they do have is a tendency to develop certain patterns of behavior that can be difficult to correct through general guidance alone.

Our “common mistakes” section is essentially a list of specific behaviors we’ve observed the assistant doing repeatedly that we want to prevent. Each item follows a simple format:

- DON’T [specific action or behavior], INSTEAD [correct approach]

For example:

- DON’T create a new booking without first checking if the resource is available. INSTEAD, use the GetAvailability tool before attempting to create a booking.
 
- DON’T provide generic responses when users report errors. INSTEAD, ask for specific error messages and use the troubleshootError tool.
 
- DON’T guess resource IDs or user permissions. INSTEAD, use the appropriate lookup tools to verify information.

What makes this approach so effective? Several things:

Directness: There’s no ambiguity in “DON’T do X.” It creates a clear boundary that the AI tends to respect.

Specificity: Each item addresses a concrete behavior we’ve actually observed, not hypothetical issues.

Alternatives: By pairing each “don’t” with an “instead,” we’re not just setting boundaries but redirecting toward better approaches.

Visibility: Having these common pitfalls collected in one place makes them easy to review and update as we identify new issues.

Can you believe how dramatically this simple section improved our assistant’s performance? Issues that we had been struggling to fix through complex prompt engineering suddenly disappeared when we explicitly called them out in this format.

For instance, we had a persistent problem where the assistant would try to be helpful by guessing at information it didn’t have, like assuming a user had booking permissions for a resource. We had tried addressing this through various prompt adjustments without success. When we added a direct “DON’T guess how features work, always check the help center first” instruction to this section, the problem vanished almost completely.

This approach also creates a practical feedback loop for improvement. When we observe the assistant making a new type of mistake repeatedly, we don’t need to rethink our entire prompt strategy; we simply add another item to this list.

The beauty of this section is that it grows organically based on real-world usage. We started with just three or four items that addressed our most pressing issues. Over time, we’ve expanded it to about ten items that cover the most common pitfalls we’ve observed.
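Because each entry is just a “don’t” paired with an “instead,” the list is easy to keep as plain data and render into the prompt. A minimal sketch, assuming the pairs are maintained in code:

# Illustrative: each observed pitfall is a (don't, instead) pair that gets
# rendered into the "common mistakes" section of the system prompt.
COMMON_MISTAKES = [
    ("create a new booking without first checking if the resource is available",
     "use the GetAvailability tool before attempting to create a booking"),
    ("provide generic responses when users report errors",
     "ask for specific error messages and use the troubleshootError tool"),
    ("guess resource IDs or user permissions",
     "use the appropriate lookup tools to verify information"),
]

def render_common_mistakes(mistakes: list[tuple[str, str]]) -> str:
    lines = ["Common mistakes to avoid:"]
    for dont, instead in mistakes:
        lines.append(f"- DON'T {dont}. INSTEAD, {instead}.")
    return "\n".join(lines)

When we spot a new recurring pitfall, adding one more pair is the entire change.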

The key insight here is that sometimes the most effective instruction is the most direct. While nuanced, contextual guidance has its place, there’s also tremendous value in simply saying “don’t do this specific thing we’ve seen you do before.”

Part 4: General Guidelines and Boundaries

The final piece of our framework addresses how the assistant should conduct itself across all interactions: its tone, boundaries, and general approach to helping users. Think of this as setting the assistant’s professional standards and defining its limitations.

Unlike the previous sections that focus on what the assistant knows and does, this section is about how it behaves. It covers aspects like:

  • Communication style (formal vs. informal)
  • How to handle uncertainty
  • When to escalate to human support
  • How to respond to out-of-scope requests
  • General principles for user interaction

For the Locaboo Assistant, this section includes guidelines like:

- Address users formally
- When uncertain about an answer, clearly state your limitations rather than guessing
- If a user request falls outside your capabilities, politely explain and offer to connect them with customer support
- Focus on being helpful and accurate rather than verbose

What makes this section valuable is that it establishes consistent behavioral patterns across all interactions. Rather than having to specify these guidelines repeatedly throughout other sections, we define them once and apply them universally.

One particularly important element here is defining how the assistant should handle its limitations. Every AI has boundaries — things it can’t do or doesn’t know. By explicitly addressing how to handle these situations, we prevent the assistant from making up answers or overstepping its role.

For example, our assistant occasionally receives requests about municipal regulations that vary by location and change over time — information it can’t reliably provide. Rather than attempting to answer these questions (and potentially providing outdated or incorrect information), our guidelines instruct it to clearly acknowledge when a question requires specialized knowledge beyond its capabilities and suggest appropriate resources.

I’ve found that being explicit about these guidelines prevents many potential problems before they occur. When we first deployed the assistant without clear boundaries, we saw instances where it would try to be overly helpful by attempting tasks beyond its capabilities or providing speculative answers to questions it couldn’t accurately address.

The key insight here is that an effective assistant needs to understand not just what it can do, but also what it shouldn’t do. By establishing clear guidelines and boundaries, we create a foundation for responsible and trustworthy AI behavior.

This section doesn’t need to be extensive — ours is just 5-6 bullet points — but it should cover the core principles that guide the assistant’s interactions across all scenarios. Think of it as defining the assistant’s professional ethics and standards of conduct.

Why This Simple Approach Works Better

After implementing our four-part framework, something remarkable happened: our assistant became not just more capable, but more reliable and easier to maintain. But why does this simple approach work so well when more complex prompting strategies often fall short?

I believe there are several key factors at play:

It aligns with how AI language models actually work. Modern AI systems like those powering our assistant don’t process information the way humans do. They don’t benefit from elaborate storytelling or complex hierarchical instructions. What they do benefit from is clear, structured information that helps them understand their role, capabilities, and constraints. Our framework provides exactly that—no more, no less.

It separates concerns effectively. Each section of our framework addresses a distinct aspect of the assistant’s functioning: context, capabilities, pitfalls, and behavioral guidelines. This separation makes it easier to update individual components without disrupting the whole. When we add a new tool, we only need to update the tools section. When we notice a new common mistake, we add it to the mistakes section without touching anything else.

It focuses on practical guidance rather than theoretical perfection. Rather than trying to craft a perfect prompt that anticipates every possible scenario, we derive that guidance from actual usage patterns. The “common mistakes” section in particular evolves based on real-world observations, not theoretical edge cases.

It’s maintainable over time. As anyone who’s worked with AI systems knows, maintenance is crucial. Requirements change, new features are added, and unexpected user behaviors emerge. Our simple framework makes ongoing maintenance straightforward. Each section has a clear purpose, making it obvious where new information should go and how it should be structured.

The results speak for themselves. Since implementing this approach:

  • Our assistant successfully completes about 30% more user requests without human intervention
  • We’ve seen a 60% reduction in instances where the assistant provides incorrect information
  • User satisfaction ratings have increased by 25%

Perhaps most tellingly, our team now spends far less time “prompt engineering” and more time on substantive improvements to the assistant’s capabilities. We’re no longer endlessly tweaking word choices or reorganizing complex prompt hierarchies — we’re focused on adding valuable features and tools.

And do you know what’s particularly satisfying? The simplicity of this approach makes it accessible to everyone on our team. You don’t need to be a “prompt engineering expert” to understand and contribute to our assistant’s development. Product managers, developers, and customer support staff can all suggest improvements within this straightforward framework.

This democratization of AI development has led to better ideas and more diverse perspectives being incorporated into our assistant. When someone from customer support notices a common user question that the assistant handles poorly, they can suggest a specific addition to the “common mistakes” section without needing to understand the intricacies of prompt engineering.

The key insight here is that effective AI prompting isn’t about complexity — it’s about clarity, structure, and practical guidance derived from real-world usage. By focusing on these principles, we’ve created an assistant that continues to improve with minimal maintenance overhead.

Conclusion: Simplicity as a Strategy

When I started building the Locaboo Assistant, I thought the path to an effective AI agent would be paved with intricate prompts and complex engineering. What I discovered instead was that simplicity isn’t just easier — it’s actually more effective.

The most valuable lesson I’ve learned while discovering this four-part framework is that sometimes the best solution isn’t adding more complexity — it’s stripping away everything that isn’t essential. This principle extends far beyond AI prompting. In product development, user experience design, and even communication, finding the simplest effective approach often yields the best results.

So what does this mean for you if you’re building your own AI assistant or just trying to get better results from AI tools?

First, don’t get caught up in prompt engineering mysticism. You don’t need elaborate role-playing scenarios or complex formatting tricks to get good results. Focus instead on clearly communicating what the AI needs to know to be helpful in your specific context.

Second, learn from real usage rather than theoretical edge cases. The most valuable improvements to our assistant came from observing actual failures and addressing them directly, not from anticipating hypothetical problems.

Third, embrace the power of explicit guidance. Sometimes the most effective instruction is simply “don’t do this specific thing we’ve seen you do before.” AI systems don’t get offended by direct feedback—they thrive on it.

Finally, remember that maintainability matters. The best prompt isn’t the one that works perfectly today but can never be updated. It’s the one that works well and can evolve easily as your needs change.

I’d love to hear about your experiences with AI assistants and prompting strategies. Have you found similar benefits from simplifying your approach? Or have you discovered other frameworks that work well for your specific needs? The field is evolving rapidly, and we all have something to learn from each other’s experiences.

As we continue developing the Locaboo Assistant, I’m excited to see how this simple framework evolves to meet new challenges. But one thing I’m confident about: the solution to those challenges won’t be adding more complexity — it will be finding the simplest path to effectiveness.

Because sometimes, the most sophisticated approach is the one that looks obvious in hindsight.
