Indie Dev Dan's 7 Prompt Chains for Decision Making, Self-Correction, and Reliable AI Agents
Posted: 2025-01-21
Category: AI for Experts
Description: Learn how to use prompt chains to build AI agents.
Video: Watch on YouTube
Introduction
Welcome! In this post, we’ll explore seven powerful prompt chains that can significantly enhance your prompt engineering and AI agent–building skills. Even if you’re an advanced developer, you may be new to the concept of prompt chaining. A prompt chain is a structured approach to orchestrating large language models (LLMs) across multiple steps—often involving branching logic, iterative refinement, fallback models, or parallel tasks.
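To make the idea concrete before we dive into the patterns, here is a minimal sketch of a two-step chain where the second prompt consumes the first prompt's output. It assumes the OpenAI Python SDK; the model name and prompts are placeholders, so swap in whichever client and model you actually use.
# Minimal two-step prompt chain (a sketch, assuming the OpenAI Python SDK).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def call_llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not a recommendation
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1 produces an outline; step 2 embeds that output in the next prompt.
outline = call_llm("Outline a short post about prompt chaining.")
draft = call_llm(f"Write the post following this outline:\n\n{outline}")
print(draft)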
If you need a more fundamental introduction, check out our Beginner’s Guide to Prompt Chaining. Otherwise, read on to learn how these seven chaining patterns can help you create more sophisticated, reliable, and agentic AI-driven software.
What You’ll Learn
- Seven distinct prompt chain patterns
- How to combine LLMs and code to produce concrete outputs
- Practical applications of each chain in software development
- Strategies for building better AI agents
Estimated Time to Complete: 30–45 minutes
Prerequisites: Basic programming knowledge and a keen interest in AI
1. The Snowball Prompt Chain
Overview: The Snowball Prompt Chain helps you iteratively develop information over multiple prompts. Each prompt builds on the previous response, gradually increasing the detail and context until you reach a final output.
How It Works
- Base Information: Start with the initial “seed” data or minimal context.
- Iterative Prompts: Each new prompt references and expands the previous output.
- Summary/Format Prompt: Finally, combine all the information into a single cohesive result.
graph LR
A[Base Information] --> B(Prompt 1)
B --> C(Prompt 2)
C --> D(Prompt n)
D --> E[Summary/Format Prompt]
E --> F[Final Output]
Example: Building a Blog Post
- Base Information: “Three unusual use cases for LLMs”
- Prompt 1: Generate click-worthy titles.
- Prompt 2: Create a detailed outline.
- Prompt 3: Draft the actual sections.
- Prompt 4: Combine everything into a formatted markdown document.
# Conceptual Python Example for the Snowball Pattern
base_info = "Three unusual use cases for LLMs"

# Prompt 1: Generate Titles
def generate_titles(base_info):
    # Simulate LLM call
    return {"title": "Unleashing LLMs: Beyond the Usual", "topic": base_info}

title_data = generate_titles(base_info)

# Prompt 2: Generate Outline
def generate_outline(title_data):
    # Simulate LLM call
    return {
        "sections": [
            "Section 1: LLMs for Creative Writing",
            "Section 2: LLMs in Healthcare",
            "Section 3: LLMs in Environmental Science"
        ]
    }

outline_data = generate_outline(title_data)

# Prompt 3: Generate Content
def generate_content(outline_data):
    # Simulate LLM call
    return {
        "content": {
            "Section 1": "Content 1...",
            "Section 2": "Content 2...",
            "Section 3": "Content 3..."
        }
    }

content_data = generate_content(outline_data)

# Prompt 4: Format Blog
def format_blog(title_data, content_data):
    # Simulate LLM call
    return (
        f"# {title_data['title']}\n\n"
        f"{content_data['content']['Section 1']}\n\n"
        f"{content_data['content']['Section 2']}\n\n"
        f"{content_data['content']['Section 3']}"
    )

blog_post = format_blog(title_data, content_data)
print(blog_post)
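In a real implementation, each step's prompt would embed the previous step's output verbatim; that accumulation of context is what makes the chain "snowball." Here is a hedged sketch of that wiring, where call_llm is a stub standing in for your actual LLM client and the prompt templates are illustrative:
# Snowball sketch: each prompt template embeds the prior step's output.
def call_llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:40]}...]"  # replace with a real call

base_info = "Three unusual use cases for LLMs"
title = call_llm(f"Write a click-worthy title about: {base_info}")
outline = call_llm(f"Create a detailed outline for a post titled: {title}")
sections = call_llm(f"Draft each section of this outline:\n{outline}")
post = call_llm(f"Format as markdown. Title: {title}\nSections:\n{sections}")
print(post)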
Key Use Cases
- Content generation (blog posts, newsletters, whitepapers)
- Research and iterative summary-building
2. The Worker Pattern
Overview: The Worker Pattern is ideal for parallelizing tasks. You generate a set of subtasks, farm them out to separate LLM “workers,” and then combine their outputs.
How It Works
- Planning Prompt: Generates or outlines tasks.
- Worker Prompts: Each prompt handles a single task in parallel.
- Summary/Format Prompt: Collects all worker outputs and unifies them.
graph LR
A[Planning Prompt] --> B(Worker 1)
A --> C(Worker 2)
A --> D(Worker n)
B --> E[Summary/Format Prompt]
C --> E
D --> E
E --> F[Final Output]
Example: Building a File-Writing Module
- Planning Prompt: List out function stubs (e.g., write_json, write_yaml, write_toml).
- Worker Prompts: Generate the code for each function in parallel.
- Summary Prompt: Combine these functions into a single file.
# Conceptual Worker Pattern Example
def plan_functions():
    # Simulate LLM call
    return {
        "function_definitions": [
            {"name": "write_json", "comment": "Writes data to a JSON file"},
            {"name": "write_yaml", "comment": "Writes data to a YAML file"},
            {"name": "write_toml", "comment": "Writes data to a TOML file"}
        ]
    }

def write_function(func_def):
    # Simulate LLM call
    if func_def['name'] == "write_json":
        return """
def write_json(filename, data):
    import json
    with open(filename, 'w') as f:
        json.dump(data, f)
"""
    elif func_def['name'] == "write_yaml":
        return """
def write_yaml(filename, data):
    import yaml
    with open(filename, 'w') as f:
        yaml.dump(data, f)
"""
    elif func_def['name'] == "write_toml":
        return """
def write_toml(filename, data):
    import toml
    with open(filename, 'w') as f:
        toml.dump(data, f)
"""

plan = plan_functions()
# Each worker prompt is handled in parallel in practice
function_codes = [write_function(defn) for defn in plan["function_definitions"]]

def format_output(function_codes):
    # Simulate LLM call
    imports = "import json\nimport yaml\nimport toml\n\n"
    return imports + "\n\n".join(function_codes)

file_module = format_output(function_codes)
print(file_module)
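The example above runs its workers sequentially; in practice you would dispatch them concurrently. One way to do that, assuming each worker wraps an independent, thread-safe LLM request, is Python's concurrent.futures:
# Running worker prompts in parallel with a thread pool (a sketch; the
# write_function stub stands in for a real per-worker LLM call).
from concurrent.futures import ThreadPoolExecutor

def write_function(func_def):
    # Stand-in for an LLM call that generates the function body
    return f"def {func_def['name']}(filename, data): ..."

function_defs = [
    {"name": "write_json"}, {"name": "write_yaml"}, {"name": "write_toml"}
]

with ThreadPoolExecutor(max_workers=3) as pool:
    # map preserves input order, so the summary step sees a stable ordering
    function_codes = list(pool.map(write_function, function_defs))

print("\n\n".join(function_codes))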
Key Use Cases
- Large-scale data aggregation
- Research tools (e.g., GPT Researcher)
- AI coding assistants that handle multiple file or function outputs
3. The Fallback Prompt Chain
Overview: The Fallback Chain tries a series of prompts or models in order of cost or reliability. Use the cheaper, faster model first, then escalate to more powerful (and often more expensive) models if needed.
How It Works
- Top-Priority Model: Run your fastest/cheapest model.
- Evaluator: Check if the output meets your needs.
- Fallback: If not, run the next model.
- Last Resort: Keep escalating until success or you hit the final model.
graph LR
A[Top Priority Model] --> B{Success?}
B -- Yes --> E[Done]
B -- No --> C[Fallback Model]
C --> D{Success?}
D -- Yes --> E
D -- No --> F[Last Resort Model]
F --> E
Example: Text-to-Speech Pipeline
- Model 1 (Low Cost): Quick, smaller model for TTS.
- Model 2 (Medium): More sophisticated fallback.
- Model 3 (High-End): Final, powerful fallback.
import random

def generate_speech(model, prompt):
    # Simulate LLM TTS call
    print(f"Trying {model}...")
    return "some audio code"  # placeholder

def evaluate_output(output):
    # 50% success simulation
    return random.choice([True, False])

models = ["Haiku", "Sonnet", "Opus"]
prompt = "Text to speech for 'hello, world'"

for model in models:
    output = generate_speech(model, prompt)
    if evaluate_output(output):
        print(f"Success with {model}!")
        break
    print("Falling back to the next model...")
Key Use Cases
- Cost optimization
- Reliability improvement (especially for tasks like text-to-SQL)
- Gradual escalation in model complexity
4. The Decision Maker Prompt Chain
Overview: This pattern uses an LLM to decide which path to take. Based on the decision (e.g., sentiment analysis, classification, user input), you trigger different code or prompt chains.
How It Works
- Analysis Prompt: Evaluate the input.
- Decision Mapping: Possible decisions map to separate actions.
- Action Execution: Execute the correct path.
graph LR
A[Input] --> B(Analysis Prompt)
B --> C{Decision}
C -- Decision 1 --> D[Action 1]
C -- Decision 2 --> E[Action 2]
C -- Decision n --> F[Action n]
D --> G[End]
E --> G
F --> G
Example: Sentiment-Based Action
def analyze_sentiment(text):
    # Simulate LLM sentiment analysis
    if "amazing" in text:
        return "positive"
    elif "fail" in text:
        return "negative"
    else:
        return "neutral"

def positive_action(text):
    return f"Positive outcome! Full analysis: {text}"

def negative_action(text):
    return f"Negative outcome. Further steps needed: {text}"

def neutral_action(text):
    return f"No strong sentiment. Logging: {text}"

statements = [
    "Our new feature launch has been amazing for conversion",
    "The marketing campaign might fail due to budget cuts",
    "We have steady engagement metrics"
]

for statement in statements:
    sentiment = analyze_sentiment(statement)
    if sentiment == "positive":
        print(positive_action(statement))
    elif sentiment == "negative":
        print(negative_action(statement))
    else:
        print(neutral_action(statement))
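The if/elif ladder works, but as the number of decisions grows, a dispatch table scales better: map each label the LLM can return to a handler, and fall back to a default for anything unexpected. A sketch of that variant (handlers and the "neutral" default are illustrative):
# Dispatch-table variant of the decision maker pattern.
def positive_action(text): return f"Positive outcome! {text}"
def negative_action(text): return f"Negative outcome. {text}"
def neutral_action(text): return f"No strong sentiment. {text}"

ACTIONS = {
    "positive": positive_action,
    "negative": negative_action,
    "neutral": neutral_action,
}

def route(sentiment: str, text: str) -> str:
    # .get guards against labels the LLM invents outside the expected set
    handler = ACTIONS.get(sentiment, neutral_action)
    return handler(text)

print(route("positive", "Our launch has been amazing"))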
Key Use Cases
- Dynamic flow control
- Conversational AI (route requests to different modules)
- Automated or semi-automated business processes
5. Plan and Execute Prompt Chain
Overview: A straightforward two-step sequence: one LLM prompt for planning, another for execution. Often used with “Chain of Thought” prompting to ensure the plan is logical before final output.
How It Works
- Planning Prompt: Generate a detailed plan.
- Execution Prompt: Use that plan to produce final results.
graph LR
A[Start] --> B(Planning Prompt)
B --> C(Execution Prompt)
C --> D[End]
Example: Software Architecture
def plan_architecture(task):
    # Simulate LLM call
    return f"Plan steps for building {task}"

def execute_architecture(plan):
    # Simulate LLM call
    return f"Executing plan: {plan}"

task = "AI assistant with text-to-speech and local SQL storage"
plan = plan_architecture(task)
final_output = execute_architecture(plan)
print(final_output)
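The leverage in this pattern comes from interpolating the full plan into the execution prompt, so the second call works from an explicit blueprint instead of re-deriving one. A hedged sketch with placeholder prompt templates (call_llm is a stub for your real client):
# Plan-and-execute sketch with explicit prompt templates.
def call_llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:50]}...]"  # replace with a real call

task = "AI assistant with text-to-speech and local SQL storage"

plan = call_llm(
    "Think step by step and produce a numbered implementation plan for: "
    + task
)
result = call_llm(
    f"Follow this plan exactly and produce the implementation:\n\n{plan}"
)
print(result)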
Key Use Cases
- Complex problem-solving
- Project or architecture planning
- Multi-step processes where a “blueprint” is needed first
6. Human-in-the-Loop Prompt Chain
Overview: A loop of AI output → user feedback → revised AI output. You involve a human at key steps to guide or correct the LLM’s direction.
How It Works
- Initial Prompt: Generate the first version of output.
- User Feedback: Ask the user to confirm or refine results.
- Iterative Prompt: Incorporate feedback to produce improved output.
- Repeat until satisfied.
graph LR
A[Initial Prompt] --> B(Request Feedback)
B --> C(Iterative Prompt)
C --> D{User Satisfied?}
D -- Yes --> E[End]
D -- No --> B
Example: Idea Generation
def generate_ideas(topic):
    return ["Idea 1", "Idea 2", "Idea 3"]

def refine_ideas(topic, feedback):
    return [f"{feedback} on Idea 1", f"{feedback} on Idea 2"]

topic = "AI Agents"
ideas = generate_ideas(topic)
print("Initial Ideas:", ideas)

while True:
    user_input = input("Provide feedback (or type 'done'): ")
    if user_input.lower() == "done":
        break
    ideas = refine_ideas(topic, user_input)
    print("Refined Ideas:", ideas)
Key Use Cases
- Chat-style applications
- Creative writing with user input
- Design or brainstorming sessions that need real-time user feedback
7. Self-Correction Agent
Overview: The LLM generates output (e.g., code), runs it or checks it, then self-corrects if it encounters errors. This loop can continue until successful execution.
How It Works
- Generate Output: The LLM attempts to produce a correct result.
- Evaluation: Check for errors.
- Self-Correction Prompt: If errors exist, feed them back to the LLM.
- Iterate until success or a maximum number of tries.
graph LR
A[Prompt] --> B(Execute)
B --> C{Correct?}
C -- Yes --> E[Complete]
C -- No --> D[Self-Correction Prompt]
D --> B
Example: Generating Bash Commands
import random

def generate_bash(prompt):
    return "ls"  # Simplified LLM call

def execute_bash(command):
    # 50% success simulation
    return random.choice([True, False])

def self_correct(prev_command, error):
    return "ls -a"  # Simplified correction

prompt = "List all files in the current directory"
command = generate_bash(prompt)

max_attempts = 3
for attempt in range(1, max_attempts + 1):
    if execute_bash(command):
        print(f"Command succeeded: {command}")
        break
    print(f"Attempt {attempt} failed; self-correcting...")
    command = self_correct(command, "mock error")
else:
    print(f"No success after {max_attempts} attempts.")
Key Use Cases
- Auto-debugging code
- Execution & review loops
- Continuous improvement for repetitive tasks
Conclusion & Next Steps
You’ve just learned seven powerful prompt chain patterns—each solving a different set of problems in AI agent design. These patterns can help you orchestrate LLMs in a more structured and reliable way, whether you need to build up context iteratively (Snowball), parallelize subtasks (Worker), optimize for cost and reliability (Fallback), or involve a human for final feedback.
Key Takeaways
- Prompt Chains = Structured Orchestration: They let you go beyond simple “prompt-response” usage of LLMs.
- Tailor Patterns to Your Needs: Some tasks benefit from iterative refinement, others from parallel sub-tasks or fallback models.
- Combine Patterns: Don’t be afraid to mix and match (e.g., a Worker Pattern with a Self-Correction loop).
Further Reading
- For a more fundamental introduction, check out our Beginner’s Guide to Prompt Chaining.
- Watch our YouTube video on advanced prompt engineering: Level Up Your Prompt Engineering.
- Experiment with these patterns in your own code and see which yields the best results.
Remember: The chat interface is just one possible front-end for LLMs. Under the hood, orchestrating your prompts (with patterns like these) is what unlocks truly sophisticated AI agents.
Happy coding and prompt chaining!