
Indie Dev Dan Teaches Prompt Engineering Master Class for ENGINEERS


Mastering the Art of Prompt Engineering: A Four-Level Framework for 2025 and Beyond (Beginner to Advanced)

Introduction

In the rapidly evolving landscape of AI, especially with the emergence of advanced language models like Qwen 2.5 and the promise of next-generation models, one skill has become paramount: prompt engineering. What was once considered a novelty is now arguably the most valuable skill you can develop. This blog post will guide you through a four-level framework designed to transform basic prompts into powerful, reusable assets that scale across your projects. By the end of this tutorial, you'll understand how to craft effective prompts, run them across various models, and build a robust prompt library. The tutorial should take approximately 45-60 minutes to complete.

Prerequisites:

  • Basic understanding of command-line interfaces (CLI)
  • Familiarity with code editors (like Cursor, VS Code, Sublime Text)
  • Some interest in generative AI and large language models (LLMs)

Level 1: Ad-Hoc Prompts – The Basics

Getting Started with CLI Tools

To begin, we'll use the llm and Ollama command-line tools, which allow us to run prompts directly from the terminal. This approach gives us fine-grained control over our experiments.

  1. Setup:
    • Open your terminal and navigate to a temporary directory.
    • Create an empty text file named prompt.txt using the command touch prompt.txt.
  2. Basic Prompting:
    • Write a simple instruction in prompt.txt, such as ping.
    • Execute the prompt using llm < prompt.txt. You’ll receive a response like “pong”.
    • Tip: The llm library facilitates prompt execution through the command line. You can find the links in the resources below.

  3. Experimenting with Simple Instructions:
    • Modify prompt.txt with instructions like count to 10 then back to zero, and rerun the command.
    • Add format keywords such as python: or XML: before the instruction to steer the output format, e.g. python: count to 10 then back to zero (see the sketch below).
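
A minimal sketch of this loop, assuming the llm CLI is installed and configured with a default model (responses will vary by model):

    # Create a scratch prompt file
    echo "count to 10 then back to zero" > prompt.txt

    # Run it against the default model
    llm < prompt.txt

    # Prefix a format keyword to steer the output
    echo "python: count to 10 then back to zero" > prompt.txt
    llm < prompt.txt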

Model Exploration

  1. Model Selection:
    • Run llm models to see a list of available models from providers such as Google (Gemini), Anthropic, and OpenAI.
  2. Model-Specific Execution:
    • To run the prompt with a specific model, use the -m flag followed by the model alias, for example: llm -m gemini-experimental < prompt.txt
    • Experiment with different models to observe how the outputs vary.

Example: SQL Table Creation

  1. Initial Prompt:
    • Update prompt.txt to request a SQL table definition using a pseudo-JSON format. For example: Output a SQLite table users with id, email address, and is_member.
  2. Iterative Tuning:
    • Fine-tune the prompt by adding constraint words like exclusively (e.g., "respond exclusively with the table definition") to tighten the output.
  3. Local Model Providers:
    • Run prompts using Ollama with models like llama3 (e.g. ollama run llama3 < prompt.txt); a worked sketch follows the warning below.
    • Test out various locally available models, such as Qwen.

    Warning: Smaller models might not follow instructions perfectly; that is expected. More capable models usually produce better output.
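
A sketch of this step, assuming Ollama is installed and the llama3 model has already been pulled (the prompt wording is illustrative, not the exact prompt from the video):

    # Ask for a SQLite table definition, constraining the output
    cat > prompt.txt <<'EOF'
    sqlite: output a SQL table users with: id, email, is_member.
    Respond exclusively with the table definition.
    EOF

    # Run it against a local model via Ollama
    ollama run llama3 < prompt.txt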

Rapid Prototyping

  1. Terminal Flexibility:
    • The combination of llm and Ollama allows rapid experimentation across models and providers.
  2. Model Comparison:
    • Test the same prompt on different models, such as Claude, and observe variations in the output.
  3. Batch Execution:
    • Create a shell script (many_models.sh) to run a prompt across multiple models; a minimal version is sketched below.
    • Use a quick prompt to generate the commands automatically with Cursor's Tab completion.
    • Execute the script to get outputs from all specified models.
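
A minimal version of such a script, assuming the llm CLI is configured with these model aliases (swap in whatever aliases llm models lists on your machine):

    #!/usr/bin/env bash
    # many_models.sh -- run one prompt file across several models
    models=("gpt-4o-mini" "claude-3-5-sonnet" "gemini-experimental")

    for model in "${models[@]}"; do
        echo "=== $model ==="
        llm -m "$model" < prompt.txt
    done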

Key Takeaway: Level one prompts provide foundational control and understanding of how different models react to specific instructions.

Level 2: Reusable Prompts – Static Variables

Level two prompts take it a step further by defining static variables within a structured format, making them more reusable.

XML Structured Prompts

  1. Template Creation:
    • We use an XML format because of its effectiveness with complex prompts, which stems from how LLMs were trained.
    • Use a code snippet (like px1 in the example) to create a template:
    <prompt>
        <purpose></purpose>
        <instructions>
            <instruction></instruction>
        </instructions>
        <interface_block></interface_block>
    </prompt>
  2. Defining Purpose and Instructions:
    • Fill in the purpose with a clear statement (e.g., "convert the TypeScript interface into a SQL table").
    • Add specific instructions, such as "use Postgres dialect and respond only with the table definition."
  3. Static Variable:
    • Add a placeholder like <interface_block></interface_block> as a static variable, which will hold the TypeScript interface.
  4. Saving Reusable Prompts:
    • Save this template as an .xml file (e.g. ts_to_sql_table.xml); a filled-in version is sketched below.
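
A filled-in version of the template, written to disk and executed in one go (a sketch; the model alias and exact wording are assumptions, not the precise prompt from the video):

    # Write the reusable level-two prompt to a file
    cat > ts_to_sql_table.xml <<'EOF'
    <prompt>
        <purpose>Convert the TypeScript interface into a SQL table</purpose>
        <instructions>
            <instruction>Use Postgres dialect</instruction>
            <instruction>Respond only with the table definition</instruction>
        </instructions>
        <interface_block></interface_block>
    </prompt>
    EOF

    # Paste an interface into <interface_block>, then run:
    llm -m gpt-4o-mini < ts_to_sql_table.xml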

Practical Application

  1. Interface Input:
    • Paste a TypeScript interface into the <interface_block>, for example:
        interface Users {
            id: string;
            email: string;
            created: number;
            isMember: boolean;
        }
  2. Running the Prompt:
    • Execute the prompt with an LLM, such as gpt-4o-mini, and observe the output.
  3. Adapting and Modifying:

    • Try modifying the interface and the target output to see the flexibility this approach gives you.

    • Try different models, such as Qwen, for the same task.

    Tip: Using XML for prompts provides better performance, especially as the prompts become more complex.

Adding Instructions Dynamically

  1. Expanding Instructions:
    • Use the <instruction></instruction> tags to add more rules to your instructions, for example:
       <instruction>Also respond with create, update, select, and delete statements for this table</instruction>
  2. Executing the Modified Prompt:
    • Run the modified prompt with models like Qwen to observe how the additional instructions are handled; one plausible result is sketched below.
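
One plausible response to the extended prompt (illustrative only; column types and statement shapes will vary by model):

    CREATE TABLE users (
        id TEXT PRIMARY KEY,
        email TEXT NOT NULL,
        created BIGINT NOT NULL,
        is_member BOOLEAN NOT NULL
    );

    INSERT INTO users (id, email, created, is_member) VALUES ($1, $2, $3, $4);
    SELECT * FROM users WHERE id = $1;
    UPDATE users SET email = $2, is_member = $3 WHERE id = $1;
    DELETE FROM users WHERE id = $1;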

Key Takeaway: Level two prompts enhance reusability with static variables and structured instructions, leading to better and more predictable output.

Level 3: Template Prompts – Examples

Level three introduces prompt templating through examples to guide the LLM toward desired outputs.

Prompt Structure

  1. Template Creation:

    • Create a new prompt using a code snippet like px2 that includes examples and placeholders:
    <prompt>
        <purpose></purpose>
        <instructions></instructions>
        <example_output></example_output>
        <content></content>
    </prompt>
  2. Purpose and Instructions

    • Define the purpose as “summarize the given content based on the instructions and example output”.
    • Set instructions for how to summarize: “Summarize into four sections: high-level summary, main points, sentiment, and three hot takes biased toward and against the author”.
    • Specify the output format, such as markdown.
  3. Example Output:

    • Construct a sample output with the four sections:
        # Title
        ## High-level summary
        ...
        ## Main points
        ...
        ## Sentiment
        ...
        ## Hot takes toward the author
        ...
        ## Hot takes against the author
        ...
  4. Content Placeholder

    • Add a placeholder where the main content goes.
  5. Save the Template:

    • Save the template to an .xml file, such as spicy_summarize.xml; the assembled version is sketched below.
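
A sketch of the assembled template (the wording is paraphrased from the steps above, not the exact prompt from the video):

    cat > spicy_summarize.xml <<'EOF'
    <prompt>
        <purpose>Summarize the given content based on the instructions and example output</purpose>
        <instructions>
            <instruction>Summarize into four sections: high-level summary, main points, sentiment, and three hot takes biased toward and against the author</instruction>
            <instruction>Respond in markdown</instruction>
        </instructions>
        <example_output>
    # Title
    ## High-level summary
    ...
    ## Main points
    ...
    ## Sentiment
    ...
    ## Hot takes toward the author
    ...
    ## Hot takes against the author
    ...
        </example_output>
        <content></content>
    </prompt>
    EOF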

Practical Application

  1. Content Insertion

    • Copy the blog content you want to summarize into the content section of your prompt.
    • Run the prompt with a model such as gemini-experimental.
  2. Analyze the Output

    • View the output and verify it aligns with the specified output format and instructions.
    • Check that the hot takes provide biased perspectives both for and against the author.
  3. Flexibility Across Models

    • Like before, these prompts can be used with any model.
  4. Batch Processing:

    • As before, use the many_models.sh script to get multiple summaries from different models.

Output Management

  1. Directing to a File:
    • Use shell redirection (the > operator) to write the result directly to a .md file, for example:
        llm -m gemini-flash-2 < spicy_summarize.xml > output_gemini_flash2.md
  2. Preview the Markdown:
    • Open the output .md file in a markdown preview to see the formatted results.

Key Takeaway: Level three prompts enhance precision with structured examples, making it far more likely the output follows your desired format and instructions.

Level 4: Scalable Prompts – Dynamic Variables

Level four prompts introduce dynamic variables, making the prompt infinitely scalable.

Core Components

  1. Structure:
    • Level four prompts have:
      • A dense purpose.
      • A static set of instructions.
      • Examples to guide the output.
      • Dynamic variables.
  2. Dynamic Variables:
    • Dynamic variables let you update prompts on the fly.
    • The example given was generating YouTube chapters.
    • You reference these variables inside your XML prompt structure.
    • Examples of such variables are the SEO keywords to weave into the chapters and the transcript with timestamps (see the sketch after this list).
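
A minimal sketch of the idea, assuming a template file with {{placeholder}} markers (the placeholder names, file names, and model alias are all illustrative; the sed r/d pair splices a file in place of a placeholder line):

    # yt_chapters.xml contains {{seo_keywords}} and {{transcript}} placeholders
    seo_keywords="prompt engineering, LLM, tutorial"

    # Inject the dynamic variables, then send the finished prompt to a model
    sed -e "s/{{seo_keywords}}/$seo_keywords/" \
        -e "/{{transcript}}/r transcript.txt" \
        -e "/{{transcript}}/d" \
        yt_chapters.xml | llm -m gemini-flash-2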

Key Takeaway: Level four prompts enable scalability through dynamic variables, and are at the core of most non-chatbot tools and applications today.

Building Tools

  1. Prompt Libraries:
    • Create prompt libraries to manage templates.
  2. Custom Tooling:
    • Build tools that interact with your prompt library and manage prompt templates, like the Maro tool shown in the video.
    • The Maro tool lets you select models and insert data into dynamic prompt variables; a bare-bones shell equivalent is sketched below.
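
Rolling a minimal version of this yourself is straightforward. Here is a hypothetical helper (the directory layout, function name, and default model are all assumptions) that runs any saved template against a chosen model:

    # run_prompt: execute a saved template from a prompt library directory
    # Usage: run_prompt <template-name> [model-alias]
    run_prompt() {
        local template="$HOME/prompts/$1.xml"
        local model="${2:-gpt-4o-mini}"

        [ -f "$template" ] || { echo "No such template: $template" >&2; return 1; }
        llm -m "$model" < "$template"
    }

    # Example: run_prompt spicy_summarize gemini-flash-2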

Conclusion

Mastering prompt engineering is crucial in the current AI landscape. This framework moves from basic ad-hoc prompts (level one), to reusable static prompts (level two), to example-driven prompts (level three), and ultimately to infinitely scalable dynamic prompts (level four). Remember that the prompt itself is the fundamental unit of knowledge. Start building your prompt library today.

Next Steps

  • Explore advanced prompt engineering techniques, such as meta-prompting.
  • Continue to iterate and experiment with different prompt formats and structures.
  • Build your own tooling to make the process easier.
  • Consider participating in the AI coding course mentioned in the video.

Resources

  • llm library
  • Ollama library
  • [Maro Tool](Link to video about Maro Tool)
  • [Blog post by Simon Willison on Paul Gauthier's Aider benchmark](Link to blog post)
  • [Best Prompt Format Video](Link to video about best prompt format)