Indie Dev Dan Teaches Prompt Engineering Master Class for ENGINEERS
Mastering the Art of Prompt Engineering: A Four-Level Framework for 2025 and Beyond (Beginner to Advanced)
Introduction
In the rapidly evolving landscape of AI, especially with the emergence of advanced language models like Qwen 2.5 and the promise of next-generation models, one skill has become paramount: prompt engineering. What was once considered a novelty is now arguably the most valuable skill you can develop. This blog post will guide you through a four-level framework designed to transform basic prompts into powerful, reusable assets that can scale across your projects. By the end of this tutorial, you’ll understand how to craft effective prompts, use them across various models, and build a robust prompt library. This tutorial should take approximately 45-60 minutes to complete.
Prerequisites:
- Basic understanding of command-line interfaces (CLI)
- Familiarity with code editors (like Cursor, VS Code, Sublime Text)
- Some interest in generative AI and large language models (LLMs)
Level 1: Ad-Hoc Prompts – The Basics
Getting Started with CLI Tools
To begin, we’ll use the `llm` and `ollama` command-line tools, which allow us to run prompts directly from the terminal. This approach gives you excellent control over your experiments.
- Setup:
  - Open your terminal and navigate to a temporary directory.
  - Create an empty text file named `prompt.txt` using the command `touch prompt.txt`.
- Basic Prompting:
  - Write a simple instruction in `prompt.txt`, such as `ping`.
  - Execute the prompt using `llm < prompt.txt`. You’ll receive a response like “pong”.
  - Tip: The `llm` library facilitates prompt execution through the command line. You can find the links in the resources below.
- Experimenting with Simple Instructions:
  - Modify `prompt.txt` with instructions like `count to 10 then back to zero`, and rerun the command.
  - Add keywords such as `python:` or `XML:` before the instruction to get different output formats, e.g. `python: count to 10 then back to zero` (a runnable walkthrough follows this list).
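Here is a minimal shell walkthrough of the steps above; it assumes you have installed Simon Willison’s `llm` CLI and configured an API key:

```sh
# Create a scratch directory and an empty prompt file
mkdir -p /tmp/prompt-playground && cd /tmp/prompt-playground
touch prompt.txt

# The simplest possible prompt
echo "ping" > prompt.txt
llm < prompt.txt            # expect a short reply such as "pong"

# A slightly richer instruction
echo "count to 10 then back to zero" > prompt.txt
llm < prompt.txt

# Prefix a keyword to steer the output format
echo "python: count to 10 then back to zero" > prompt.txt
llm < prompt.txt            # the model tends to reply with Python code
```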
Model Exploration
- Model Selection:
  - Run `llm models` to see a list of available models, including Gemini, Anthropic, and OpenAI options.
- Model-Specific Execution:
  - To run the prompt using a specific model, use the `-m` flag followed by the model alias, for example `llm -m gemini-experimental < prompt.txt`.
  - Experiment with different models, such as `gemini-experimental`, to observe varied outputs (see the commands below).
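As a quick sketch, model selection looks like this; the aliases shown are examples and depend on which `llm` plugins and API keys you have installed:

```sh
# List every model the llm CLI can currently reach
llm models

# Run the same prompt against specific models via their aliases
llm -m gemini-experimental < prompt.txt
llm -m gpt-4o-mini < prompt.txt
```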
Example: SQL Table Creation
- Initial Prompt:
  - Update `prompt.txt` to request a SQL table definition using a pseudo-JSON format. For example: `Output a SQLite table users with id, email address, and is_member`.
- Iterative Tuning:
  - Fine-tune the prompt by adding constraints like `exclusively` to refine the output.
- Local Model Providers:
  - Run prompts using `ollama` with models like `llama3` (e.g. `ollama run llama3 < prompt.txt`).
  - Test out various locally available models, such as Qwen (a shell sketch of this loop follows below).

Warning: Smaller models might not follow instructions perfectly, which is expected. More powerful models usually give better outputs.
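Here is the same loop sketched in shell; `llama3` and `qwen2.5` are example Ollama model tags and assume you have already pulled them with `ollama pull`:

```sh
# Ask for a SQLite table definition
echo 'Output exclusively a SQLite table users with id, email address, and is_member' > prompt.txt

# Run it against a hosted model...
llm < prompt.txt

# ...and against local models served by Ollama
ollama run llama3 < prompt.txt
ollama run qwen2.5 < prompt.txt   # smaller local models may drift from the instruction
```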
Rapid Prototyping
- Terminal Flexibility:
  - The combination of `llm` and `ollama` allows rapid experimentation across models and providers.
- Model Comparison:
  - Test the same prompt on different models, such as Claude, and observe variations in the output.
- Batch Execution:
  - Create a shell script (`many_models.sh`) to run a prompt across multiple models (a sketch follows this list).
  - Use a quick prompt to generate the commands automatically using Cursor Tab.
  - Execute the script to get outputs across all specified models.
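A minimal sketch of what `many_models.sh` could look like; the model aliases are placeholders for whatever `llm models` reports on your machine:

```sh
#!/usr/bin/env bash
# many_models.sh - run the same prompt file against several models
# Usage: ./many_models.sh prompt.txt

PROMPT_FILE="${1:-prompt.txt}"

# Example aliases; substitute the ones `llm models` lists for you
for model in gpt-4o-mini gemini-experimental claude-3.5-sonnet; do
  echo "===== ${model} ====="
  llm -m "${model}" < "${PROMPT_FILE}"
  echo
done
```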
Key Takeaway: Level one prompts provide foundational control and understanding of how different models react to specific instructions.
Level 2: Reusable Prompts – Static Variables
Level two prompts take it a step further by defining static variables within a structured format, making them more reusable.
XML Structured Prompts
- Template Creation:
  - We use an XML format because it works especially well for complex prompts, a consequence of how LLMs were trained.
  - Use a code snippet (like `px1` in the example) to create a template:

```xml
<prompt>
    <purpose></purpose>
    <instructions>
        <instruction></instruction>
    </instructions>
    <interface_block></interface_block>
</prompt>
```
- Defining Purpose and Instructions:
  - Fill in the `purpose` with a clear statement (e.g., “convert the TypeScript interface into a SQL table”).
  - Add specific `instructions`, such as “use the Postgres dialect and respond only with the table definition.”
- Static Variable:
  - Add a placeholder like `<interface_block></interface_block>` as a static variable, which will hold the TypeScript interface.
- Saving Reusable Prompts:
  - Save this template as a `.xml` file (e.g. `ts_to_sql_table.xml`).
Practical Application
- Interface Input:
  - Paste a TypeScript interface into the `<interface_block>`, for example:

```typescript
interface users {
  ID: string;
  email: string;
  created: number;
  isMember: boolean;
}
```

- Running the Prompt:
  - Execute the prompt with an LLM, such as `gpt-4o-mini`, and observe the output (a full worked example follows below).
- Adapting and Modifying:
  - Try modifying the interface and the target output to see the flexibility this gives.
  - Try different models, such as Qwen, for the same task.

Tip: Using XML for prompts provides better performance, especially as the prompts become more complex.
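Putting the pieces together, here is a sketch of the complete workflow; the filled-in XML mirrors the template above, and the wording of the purpose and instructions is illustrative:

```sh
# Write the reusable level-two prompt to a file
cat > ts_to_sql_table.xml <<'EOF'
<prompt>
    <purpose>Convert the TypeScript interface into a SQL table.</purpose>
    <instructions>
        <instruction>Use the Postgres dialect.</instruction>
        <instruction>Respond only with the table definition.</instruction>
    </instructions>
    <interface_block>
        interface users {
            ID: string;
            email: string;
            created: number;
            isMember: boolean;
        }
    </interface_block>
</prompt>
EOF

# Run it against a model of your choice
llm -m gpt-4o-mini < ts_to_sql_table.xml
```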
Adding Instructions Dynamically
- Expanding Instructions:
  - Use the `<instruction></instruction>` tags to add more rules to your instructions, for example:
    `<instruction>Also respond with create, update, select, and delete statements for this table</instruction>`
- Executing the Modified Prompt:
  - Run the modified prompt with models like Qwen to observe how the additional instructions are handled.
Key Takeaway: Level two prompts enhance reusability with static variables and structured instructions, leading to better and more predictable output.
Level 3: Template Prompts – Examples
Level three introduces prompt templating through examples to guide the LLM toward desired outputs.
Prompt Structure
- Template Creation:
  - Create a new prompt using a code snippet like `px2` that includes examples and placeholders:

```xml
<prompt>
    <purpose></purpose>
    <instructions></instructions>
    <example_output></example_output>
    <content></content>
</prompt>
```
- Purpose and Instructions:
  - Define the `purpose` as “summarize the given content based on the instructions and example output”.
  - Set `instructions` for how to summarize: “Summarize into four sections: high-level summary, main points, sentiment, and three hot takes biased toward and against the author”.
  - Specify the output format, such as markdown.
- Example Output:
  - Construct a sample output with the four sections:

```markdown
# Title
## High-level summary
...
## Main points
...
## Sentiment
...
## Hot takes toward the author
...
## Hot takes against the author
...
```
- Content Placeholder:
  - Add a placeholder where the main content goes.
- Save the Template:
  - Save the template into an `.xml` file, such as `spicy_summarize.xml` (an assembled sketch follows below).
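As a concrete sketch, the assembled template might look like the following; the exact wording is illustrative, so adapt it to your own summaries:

```sh
cat > spicy_summarize.xml <<'EOF'
<prompt>
    <purpose>Summarize the given content based on the instructions and example output.</purpose>
    <instructions>
        Summarize into four sections: high-level summary, main points, sentiment,
        and three hot takes biased toward and three biased against the author.
        Respond in markdown.
    </instructions>
    <example_output>
        # Title
        ## High-level summary
        ...
        ## Main points
        ...
        ## Sentiment
        ...
        ## Hot takes toward the author
        ...
        ## Hot takes against the author
        ...
    </example_output>
    <content>
        <!-- paste the blog post or transcript to summarize here -->
    </content>
</prompt>
EOF
```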
Practical Application
- Content Insertion:
  - Copy the blog content you want to summarize into the `<content>` section of your prompt.
  - Run the prompt with a model such as `gemini-experimental`.
- Analyze the Output:
  - View the output and verify it aligns with the specified output format and instructions.
  - Check that the hot takes provide biased perspectives both for and against the author.
- Flexibility Across Models:
  - As before, these prompts can be used with any model.
- Batch Processing:
  - As before, use the `many_models.sh` script to get multiple summaries from different models.
Output Management
- Directing to a File:
- Use shell redirection to output the result directly to a
.md
file by using the>
operator, like:
- Use shell redirection to output the result directly to a
llm -m gemini-flash-2 < spicy_summarize.xml > output_gemini_flash2.md
- Preview the Markdown:
  - Open the output `.md` file in a markdown preview to see the formatted results.
Key Takeaway: Level three prompts enhance precision with structured examples, making it far more likely the output follows your desired format and instructions.
Level 4: Scalable Prompts – Dynamic Variables
Level four prompts introduce dynamic variables, making the prompt infinitely scalable.
Core Components
- Structure:
  - Level four prompts have:
    - A dense purpose.
    - A static set of instructions.
    - Examples to guide the output.
    - Dynamic variables.
- Dynamic Variables:
  - Dynamic variables enable updating prompts on the fly (a substitution sketch follows this list).
  - The example given was generating YouTube chapters.
  - You reference these variables in your XML prompt structure.
  - Examples of this would be the SEO keywords to add to the chapters and the transcript with timestamps.
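Below is a minimal sketch of dynamic variable substitution, assuming a template file `youtube_chapters.xml` that marks its variables as `{{seo_keywords}}` and `{{transcript}}`; the placeholder syntax and file names are hypothetical:

```sh
#!/usr/bin/env bash
# generate_chapters.sh - fill the dynamic variables, then run the prompt
# Usage: ./generate_chapters.sh transcript.txt "keyword one, keyword two"

TRANSCRIPT_FILE="$1"
SEO_KEYWORDS="$2"

# Substitute the keywords inline, splice the transcript file in at the
# {{transcript}} marker, and pipe the finished prompt to a model.
# (A sketch: keywords containing / or & would need escaping for sed.)
sed -e "s/{{seo_keywords}}/${SEO_KEYWORDS}/" \
    -e "/{{transcript}}/r ${TRANSCRIPT_FILE}" \
    -e "s/{{transcript}}//" \
    youtube_chapters.xml | llm -m gpt-4o-mini
```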
Key Takeaway: Level four prompts enable scalability through dynamic variables, and are at the core of most non-chatbot tools and applications today.
Building Tools
- Prompt Libraries:
  - Create prompt libraries to manage templates.
- Custom Tooling:
  - Build tools to interact with prompt libraries and manage prompt templates, like the MARO tool shown in the video (a minimal sketch follows below).
  - The MARO tool allows you to select models and insert data into dynamic prompt variables.
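You don’t need a full application to get started; a small shell helper over a directory of templates already goes a long way. This is a hypothetical sketch, not the MARO tool itself:

```sh
#!/usr/bin/env bash
# run_prompt.sh - pick a template from a prompt library and run it
# Usage: ./run_prompt.sh ts_to_sql_table gpt-4o-mini

LIBRARY_DIR="${PROMPT_LIBRARY:-$HOME/prompts}"   # where your .xml templates live
TEMPLATE="$1"
MODEL="${2:-gpt-4o-mini}"

llm -m "${MODEL}" < "${LIBRARY_DIR}/${TEMPLATE}.xml"
```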
Conclusion
Mastering prompt engineering is crucial in the current AI landscape. This framework moves from basic ad-hoc prompts (level one), up to reusable static prompts (level two), to example-driven prompts (level three), and ultimately to infinitely scalable dynamic prompts (level four). Remember that the prompt itself is the fundamental unit of knowledge. Start building your prompt library today.
Next Steps
- Explore advanced prompt engineering techniques, such as meta-prompting.
- Continue to iterate and experiment with different prompt formats and structures.
- Build your own tooling to make the process easier.
- Consider participating in the AI coding course mentioned in the video.
Resources
- llm library
- ollama library
- [Maro Tool](Link to video about Maro Tool)
- [Blog post by Simon Willison on Paul Gauthier’s Aider benchmark](Link to blog post)
- [Best Prompt Format Video](Link to video about best prompt format)