Building a Production-Ready AI Analytics Platform with Cursor: A Step-by-Step Guide (Beginner)
Introduction
This blog post guides you through building a fully functional AI analytics platform using Cursor, an AI code editor that lets you build applications with natural language. The tutorial demonstrates how to leverage Cursor’s capabilities to create a social media analytics platform focused on Reddit, and how to integrate it with large language model (LLM) monitoring tools. You will learn how to plan your project, define core functionalities, and use AI to streamline your development workflow. The post also shows how to integrate the application with Supabase as a backend, how to polish the UI with V0, and how to deploy the result to Vercel.
Estimated Time to Complete: 3-4 hours
Prerequisites:
- Basic understanding of programming concepts
- Familiarity with JavaScript or TypeScript
- A GitHub account
- Basic understanding of React and Next.js will be beneficial but not required.
- OpenAI and Reddit API keys
What you will learn:
- How to plan and scope a project using AI tools
- How to structure a Next.js application with shadcn/ui and Tailwind CSS
- How to fetch data from Reddit using the Snoowrap library
- How to use OpenAI to categorize post data
- How to integrate Supabase for backend functionality
- How to use V0 to style the UI
- How to monitor LLM usage with Helicone.
1. Project Planning and Scoping
Before diving into code, planning is essential. This includes defining the core functionalities and selecting the appropriate tools and packages.
1.1 Defining Core Functionalities
The first step is to determine what the core functionalities of our application will be. In this case, we want to replicate and enhance GummySearch, an AI analytics platform for Reddit.
Core functionalities include:
- Viewing a list of available subreddits.
- Adding new subreddits.
- Displaying a detailed page for each subreddit with top content.
- Categorizing posts based on type (e.g., solution requests, pain points, advice, money talk) using OpenAI.
- Bonus: adding new categories.
1.2 Research and Package Selection
Next, we will research necessary libraries and packages:
- Next.js 14: The web application framework.
- shadcn/ui: A component library for UI elements.
- Tailwind CSS: A utility-first CSS framework.
- Lucide: An icon library for UI elements.
- Snoowrap: A library for fetching Reddit data.
- OpenAI: For processing and categorizing post data.
- Zod: For data validation.
- Supabase: For the backend database.
- Helicone: For LLM monitoring.
1.3 Setting up the Project
- Create a new GitHub repository called reddit-analytics-platform.
- Create an instructions.md file in the root directory with the following structure:
  - Project Overview: Description of the application.
  - Core Functionalities: Detailed list of features.
  - Technical Stack: List of tools and technologies used.
  - Documentation: Code examples and explanations.
  - File Structure: Initial directory layout.
2. Setting Up the Development Environment
2.1 Project Initialization
- Use the shadcn CLI to initialize a new Next.js project:
npx shadcn@latest init
Choose a project name and accept the default values for the other prompts.
- Navigate to the new project folder:
cd reddit-analytics-platform
2.2 Installing Dependencies
Install the required packages:
npm install snoowrap openai zod
Also, add the required shadcn/ui components:
npx shadcn@latest add badge card input label sheet table tabs
2.3 Setting Up API Keys
- Reddit API Credentials: Go to reddit.com/prefs/apps to create a new application. Note the client ID, client secret, and your Reddit username and password for creating the API object later.
- OpenAI API Key: Obtain an API key from the OpenAI platform.
Create a .env.local file in the root directory and add the following variables:
REDDIT_CLIENT_ID=<your_reddit_client_id>
REDDIT_CLIENT_SECRET=<your_reddit_client_secret>
REDDIT_USER_NAME=<your_reddit_user_name>
REDDIT_PASSWORD=<your_reddit_password>
OPENAI_API_KEY=<your_openai_api_key>
HELICONE_API_KEY=<your_helicone_api_key>
HELICONE_BASE_URL=<your_helicone_base_url>
SUPABASE_URL=<your_supabase_url>
SUPABASE_ANON_KEY=<your_supabase_anon_key>
SUPABASE_SERVICE_KEY=<your_supabase_service_key>
Note: The Helicone and Supabase keys will be set up later.
2.4 Initial File Structure
Update instructions.md with the current file structure. Use the following command from the root directory:
tree -L 2 -I "node_modules"
Copy the output into instructions.md.
3. Core Functionality Implementation
3.1 Fetching Reddit Data with Snoowrap
- Create a new file named reddit.ts inside a lib directory within your project.
- Use the following TypeScript code snippet to fetch Reddit post data:
import snoowrap from "snoowrap";

export async function fetchRedditPosts(subreddit: string) {
  // Authenticate with the Reddit API using the script app credentials from .env.local
  const r = new snoowrap({
    userAgent: "Post categorizer",
    clientId: process.env.REDDIT_CLIENT_ID,
    clientSecret: process.env.REDDIT_CLIENT_SECRET,
    username: process.env.REDDIT_USER_NAME,
    password: process.env.REDDIT_PASSWORD,
  });
  try {
    // Fetch the 25 hottest posts and map them to a plain object for later processing
    const posts = await r.getSubreddit(subreddit).getHot({ limit: 25 });
    return posts.map((post) => ({
      title: post.title,
      content: post.selftext,
      score: post.score,
      numComments: post.num_comments,
      date: post.created_utc,
      id: post.id,
    }));
  } catch (error) {
    console.error("Error fetching Reddit posts:", error);
    return [];
  }
}
- Add this code as a code example in instructions.md to reference later.
3.2 Categorizing Post Data with OpenAI
- Create an openai.ts file inside the lib directory with the following code, which uses OpenAI function calling to return a structured category object for each Reddit post:
import OpenAI from "openai";
import { z } from "zod";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Zod schema used to validate the model's structured output
const categorySchema = z.object({
  solution_request: z
    .boolean()
    .describe("If the post is about a request for a solution."),
  pain_point: z.boolean().describe("If the post is about a pain point."),
  advice_request: z.boolean().describe("If the post is asking for advice."),
  money_talk: z.boolean().describe("If the post is about money."),
});

export type Category = z.infer<typeof categorySchema>;

// JSON Schema passed to OpenAI so the model returns the categories as function arguments
const categoryJsonSchema = {
  type: "object",
  properties: {
    solution_request: {
      type: "boolean",
      description: "If the post is about a request for a solution.",
    },
    pain_point: {
      type: "boolean",
      description: "If the post is about a pain point.",
    },
    advice_request: {
      type: "boolean",
      description: "If the post is asking for advice.",
    },
    money_talk: {
      type: "boolean",
      description: "If the post is about money.",
    },
  },
  required: ["solution_request", "pain_point", "advice_request", "money_talk"],
};

export async function categorizePost(
  postContent: string,
): Promise<Category | null> {
  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-4-1106-preview",
      messages: [
        {
          role: "user",
          content: `Categorize the following Reddit post based on the categories provided:\n${postContent}`,
        },
      ],
      functions: [
        {
          name: "categorize_post",
          parameters: categoryJsonSchema,
        },
      ],
      // Force the model to call the function so we always get structured arguments back
      function_call: { name: "categorize_post" },
    });

    const result = completion.choices[0].message.function_call?.arguments;
    if (!result) return null;
    // Validate the model output against the Zod schema before returning it
    return categorySchema.parse(JSON.parse(result));
  } catch (error) {
    console.error("Error categorizing post:", error);
    return null;
  }
}
- Add this code as a code example in instructions.md to reference later.
3.3 Creating the UI Components
- Implement a component to display the list of available subreddits.
- Implement components to allow users to add new subreddits, for example, a modal.
- Implement a component to display a specific subreddit with two tabs: “Top Posts” and “Categories” (a minimal sketch follows this list).
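For reference, here is a minimal sketch of the tabbed subreddit view built from the table and tabs components added in section 2.2. The file path, component name, and prop types (SubredditTabs, PostWithCategory) are illustrative assumptions, not code from the original project:

// components/subreddit-tabs.tsx (illustrative name and location)
"use client";

import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs";
import {
  Table,
  TableBody,
  TableCell,
  TableHead,
  TableHeader,
  TableRow,
} from "@/components/ui/table";

// Shape assumed to match what fetchRedditPosts + categorizePost return
export type PostWithCategory = {
  id: string;
  title: string;
  score: number;
  numComments: number;
  category?: {
    solution_request?: boolean;
    pain_point?: boolean;
    advice_request?: boolean;
    money_talk?: boolean;
  } | null;
};

export function SubredditTabs({ posts }: { posts: PostWithCategory[] }) {
  return (
    <Tabs defaultValue="top-posts">
      <TabsList>
        <TabsTrigger value="top-posts">Top Posts</TabsTrigger>
        <TabsTrigger value="categories">Categories</TabsTrigger>
      </TabsList>

      {/* Top posts sorted by score */}
      <TabsContent value="top-posts">
        <Table>
          <TableHeader>
            <TableRow>
              <TableHead>Title</TableHead>
              <TableHead>Score</TableHead>
              <TableHead>Comments</TableHead>
            </TableRow>
          </TableHeader>
          <TableBody>
            {[...posts]
              .sort((a, b) => b.score - a.score)
              .map((post) => (
                <TableRow key={post.id}>
                  <TableCell>{post.title}</TableCell>
                  <TableCell>{post.score}</TableCell>
                  <TableCell>{post.numComments}</TableCell>
                </TableRow>
              ))}
          </TableBody>
        </Table>
      </TabsContent>

      {/* Simple per-category post counts */}
      <TabsContent value="categories">
        <ul>
          {(["solution_request", "pain_point", "advice_request", "money_talk"] as const).map(
            (key) => (
              <li key={key}>
                {key}: {posts.filter((p) => p.category?.[key]).length} posts
              </li>
            ),
          )}
        </ul>
      </TabsContent>
    </Tabs>
  );
}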
3.4 Displaying Top Posts and Categories
- Fetch the top posts for the selected subreddit using fetchRedditPosts.
- Use categorizePost to categorize each post.
- Display the posts and their categories in the UI using shadcn/ui components (see the wiring sketch after this list).
- Add the ability to add new categories.
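Before Supabase caching is introduced in section 4, the two helpers can be wired together in a simple route handler. This is a minimal sketch: the route path, the @/lib import aliases, and the use of the App Router are assumptions about the project layout, not code from the original tutorial.

// app/api/subreddits/[name]/posts/route.ts (illustrative path)
import { NextResponse } from "next/server";
import { fetchRedditPosts } from "@/lib/reddit";
import { categorizePost } from "@/lib/openai";

export async function GET(
  _request: Request,
  { params }: { params: { name: string } },
) {
  // Fetch the hottest posts for the requested subreddit
  const posts = await fetchRedditPosts(params.name);

  // Categorize every post in parallel and attach the result to each post
  const postsWithCategories = await Promise.all(
    posts.map(async (post) => ({
      ...post,
      category: await categorizePost(post.content || post.title),
    })),
  );

  return NextResponse.json(postsWithCategories);
}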
4. Integrating Supabase for Backend
4.1 Setting up Supabase
- Create a new project on Supabase.
- Navigate to Project Settings -> API and obtain the Supabase URL and anon key.
- Add these to the .env.local file.
- Go to the SQL Editor, paste the following code, and click “Run” to create the tables:
create table profiles (
id uuid not null primary key,
tier text,
credit int,
stripe_customer_id text,
stripe_subscription_id text,
created_at timestamptz default now()
);
create table subreddits (
id uuid not null primary key default uuid_generate_v4(),
name text not null,
last_updated_at timestamptz default now(),
created_at timestamptz default now(),
constraint subreddits_name_key unique (name)
);
create table posts (
id uuid not null primary key default uuid_generate_v4(),
subreddit_id uuid not null references subreddits(id),
title text not null,
content text,
score int,
num_comments int,
created_at timestamptz,
reddit_id text not null,
constraint posts_reddit_id_key unique (reddit_id)
);
create table post_categories (
id uuid not null primary key default uuid_generate_v4(),
post_id uuid not null references posts(id),
solution_request boolean,
pain_point boolean,
advice_request boolean,
money_talk boolean,
created_at timestamptz default now()
);
Note: This creates the profiles, subreddits, posts, and post_categories tables that will be used later. If uuid_generate_v4() is not available in your Supabase project, enable the uuid-ossp extension or replace it with gen_random_uuid().
4.2 Installing Supabase Client
Install the Supabase client library:
npm install @supabase/supabase-js
4.3 Initializing Supabase Client
- Create a supabase.ts file in the lib directory to initialize the client:
import { createClient } from "@supabase/supabase-js";
const supabaseUrl = process.env.SUPABASE_URL;
const supabaseKey = process.env.SUPABASE_ANON_KEY;
export const supabase = createClient(supabaseUrl!, supabaseKey!);
- Add a second client that uses the service key for server-side operations:
// Server-side only: the service role key bypasses Row Level Security
export const supabaseAdmin = createClient(
  supabaseUrl!,
  process.env.SUPABASE_SERVICE_KEY!,
);
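The code in section 4.4 imports a Database type from "@/types/supabase". That file is typically generated rather than written by hand; below is a minimal sketch of a typed client setup, assuming you generate the types with the Supabase CLI (the project ID and output path are placeholders):

// lib/supabase.ts — typed variant (optional)
// Generate the types once with the Supabase CLI, for example:
//   supabase gen types typescript --project-id <your_project_id> > types/supabase.ts
import { createClient } from "@supabase/supabase-js";
import { Database } from "@/types/supabase";

const supabaseUrl = process.env.SUPABASE_URL;

// Passing the generated Database type gives autocomplete and type checking
// on .from("..."), .select(), .insert(), and .update() calls.
export const supabase = createClient<Database>(
  supabaseUrl!,
  process.env.SUPABASE_ANON_KEY!,
);

export const supabaseAdmin = createClient<Database>(
  supabaseUrl!,
  process.env.SUPABASE_SERVICE_KEY!,
);

If you skip type generation, remove the Database import from the snippet in section 4.4; the untyped clients above still work.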
4.4 Saving and Fetching Data from Supabase
- Modify the data fetching logic to save Reddit post data and AI analysis results to Supabase.
- Implement logic to fetch data from Supabase first and only fetch new data from Reddit if the last update time is older than 24 hours.
// In lib/reddit.ts
import { supabase, supabaseAdmin } from "./supabase";
import { Database } from "@/types/supabase";
import { categorizePost, Category } from "./openai";
export async function fetchAndProcessRedditPosts(subredditName: string) {
const subreddits = await supabase
.from("subreddits")
.select()
.eq("name", subredditName)
.limit(1);
let subreddit = subreddits.data?.[0];
if (!subreddit) {
const newSubreddit = await supabaseAdmin
.from("subreddits")
.insert({ name: subredditName })
.select()
.limit(1);
subreddit = newSubreddit.data?.[0];
if (!subreddit) return []; // handle this error if the insert fails
}
if (subreddit && subreddit.last_updated_at) {
const lastUpdatedTime = new Date(subreddit.last_updated_at).getTime();
const currentTime = new Date().getTime();
const timeDifferenceInHours =
(currentTime - lastUpdatedTime) / (1000 * 60 * 60);
if (timeDifferenceInHours < 24) {
const posts = await supabase
.from("posts")
.select("*, post_categories(*)")
.eq("subreddit_id", subreddit.id);
return posts.data || [];
}
}
const posts = await fetchRedditPosts(subredditName);
if (!posts) return [];
const processedPosts = await Promise.all(
posts.map(async (post) => {
const category = await categorizePost(post.content || post.title);
return {
...post,
category,
};
}),
);
const newPosts = await Promise.all(
processedPosts.map(async (post) => {
const newPost = await supabaseAdmin
.from("posts")
.upsert({
subreddit_id: subreddit.id,
title: post.title,
content: post.content,
score: post.score,
num_comments: post.numComments,
created_at: new Date(post.date * 1000).toISOString(),
reddit_id: post.id,
})
.select()
.limit(1);
if (newPost.error || !newPost.data) return null;
      // .select() makes the insert return the created row so it can be attached below
      const newPostCategory = await supabaseAdmin
        .from("post_categories")
        .insert({
          post_id: newPost.data[0].id,
          solution_request: post.category?.solution_request,
          pain_point: post.category?.pain_point,
          advice_request: post.category?.advice_request,
          money_talk: post.category?.money_talk,
        })
        .select();
return {
...newPost.data[0],
...post,
category: newPostCategory.data?.[0],
};
}),
);
await supabaseAdmin
.from("subreddits")
.update({ last_updated_at: new Date().toISOString() })
.eq("id", subreddit.id);
return newPosts.filter(Boolean);
}
5. Monitoring LLM Usage with Helicone
5.1 Setting up Helicone
- Create a Helicone account if you don’t have one.
- Obtain the Helicone base URL and API key.
- Add these to the .env.local file.
5.2 Integrating Helicone with OpenAI
- Modify the OpenAI client to add the Helicone base URL and auth header.
// In lib/openai.ts
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // Route requests through the Helicone proxy (e.g. https://oai.helicone.ai/v1)
  baseURL: process.env.HELICONE_BASE_URL,
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});
5.3 Monitoring LLM calls
- After the application is running, navigate to the Helicone dashboard to see the cost, latency, and usage of the different API calls.
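If you also want to break these metrics down per subreddit, Helicone supports custom properties passed as request headers. The sketch below is illustrative: the helper name categorizePostWithTag and the extra subredditName parameter are assumptions, not part of the original tutorial.

// In lib/openai.ts (illustrative extension)
export async function categorizePostWithTag(
  postContent: string,
  subredditName: string,
) {
  return openai.chat.completions.create(
    {
      model: "gpt-4-1106-preview",
      messages: [
        {
          role: "user",
          content: `Categorize the following Reddit post based on the categories provided:\n${postContent}`,
        },
      ],
    },
    {
      // Helicone-Property-* headers appear as filterable custom properties
      // in the Helicone dashboard (e.g. cost and latency per subreddit).
      headers: {
        "Helicone-Property-Subreddit": subredditName,
        "Helicone-Property-Feature": "post-categorization",
      },
    },
  );
}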
6. Improving UI with V0
6.1 Using V0 to style components
- Go to V0’s website.
- Copy the code for a specific component (e.g., pages/index.tsx).
- Paste the code into V0 and give it instructions such as “Make the UI better without changing the functionality”.
- Copy the new generated UI code.
- Replace the existing code in your project.
- Repeat this for other components to enhance the UI.
7. Deployment
7.1 Deploying to Vercel
- Create an account on Vercel.
- Import the GitHub repository.
- Configure the environment variables.
- Deploy the application.
8. Conclusion and Next Steps
In this tutorial, you have learned how to build a functional AI analytics platform using Cursor, integrate various tools, and set up a back end. You now have a good start to build more complex applications.
Key Takeaways:
- Importance of planning and documentation for AI-assisted development
- Using different tools and techniques to build a full-stack application.
- How to use an AI editor to accelerate and improve your workflow
- How to incorporate LLM monitoring tools to track cost and usage.
Next Steps:
- Implement user authentication.
- Connect to payment gateways like Stripe for user subscriptions.
- Add more data sources.
- Add a chat functionality with the Reddit data.
- Explore more ways to optimize and finetune the LLM calls.
Additional Resources: