My 30-Day Journey Building an AI-Powered App with Next.js and OpenAI

nextjs ai openai startup javascript

The ups, downs, and surprising lessons learned creating an AI writing assistant when our startup funding was running out


Day 1: The Desperation Pitch

"We need a new product. Fast."

Our CEO's words hung in the air of our weekly all-hands meeting. Our startup had six months of runway left, and our main product wasn't gaining traction quickly enough. The next funding round was looking uncertain.

"I've been experimenting with OpenAI's API," I offered, not quite believing what I was about to propose. "What if we build an AI writing assistant? Something that helps content creators draft blog posts, social media updates, and marketing copy."

The room fell silent. Then our product manager spoke up: "How quickly could we build an MVP?"

"Give me 30 days," I said with far more confidence than I felt. "I can build a proof of concept using Next.js and the OpenAI API."

And just like that, I had committed to building an AI-powered application in a month, despite having only tinkered with the OpenAI API for a few hours. What could possibly go wrong?

Day 3: The Architecture Plan

After spending two days researching existing AI writing tools and experimenting with the OpenAI API, I settled on an architecture for our application:

  1. Next.js 14 as the framework, using the App Router for better performance
  2. Tailwind CSS for styling
  3. Vercel for hosting
  4. OpenAI API for text generation
  5. Supabase for authentication and storing user data
  6. Stripe for payments

The application would start simple: users could create a new document, select a content type (blog post, social media, email), provide a brief, and the AI would generate content. Users could then edit the generated content, save it, and export it.
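Before writing any code, I sketched the data model this flow implies. The field names below are illustrative placeholders, not the final Supabase schema:

```typescript
import { randomUUID } from "node:crypto";

// Minimal document model implied by the flow above.
// Field names are illustrative assumptions, not the final schema.
type ContentType = "blog-post" | "social-media" | "email";

interface Doc {
  id: string;
  userId: string;
  title: string;
  contentType: ContentType;
  brief: string;   // the user's short description of what to write
  content: string; // AI-generated text, later edited by the user
  createdAt: string;
  updatedAt: string;
}

// Factory for a fresh draft before the AI has generated anything.
export function createDraft(
  userId: string,
  contentType: ContentType,
  brief: string
): Doc {
  const now = new Date().toISOString();
  return {
    id: randomUUID(),
    userId,
    title: "Untitled",
    contentType,
    brief,
    content: "",
    createdAt: now,
    updatedAt: now
  };
}
```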

I created a new Next.js project with the following directory structure:

src/
  app/
    api/
      completion/
        route.ts
      documents/
        route.ts
    dashboard/
      page.tsx
    editor/
      [id]/
        page.tsx
    auth/
      sign-in/
        page.tsx
      sign-up/
        page.tsx
    page.tsx
  components/
    ui/
    editor/
    templates/
  lib/
    supabase.ts
    openai.ts
  styles/
    globals.css

This structure would allow for a clean separation of concerns while taking advantage of Next.js 14's server components and route handlers.

Day 5: The First OpenAI Integration

My first challenge was integrating the OpenAI API. I wanted to create a simple yet flexible abstraction that would allow us to:

  1. Send prompts to the API
  2. Stream the responses back to the client
  3. Keep track of tokens used for both billing and rate limiting
  4. Handle errors gracefully

I started with a simple API route:

// src/app/api/completion/route.ts
import { OpenAIStream, StreamingTextResponse } from "ai";
import { openai } from "@/lib/openai";
import { auth } from "@/lib/auth";
 
export async function POST(req: Request) {
  try {
    // Verify authentication
    const session = await auth();
    if (!session) {
      return new Response("Unauthorized", { status: 401 });
    }
 
    // Parse the request body
    const { prompt, template, maxTokens = 500 } = await req.json();
 
    // Construct the full prompt with template
    const fullPrompt = constructPrompt(prompt, template);
 
    // Create the completion with streaming. GPT-4 is a chat model, so it
    // goes through the chat completions endpoint, not legacy completions.
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: fullPrompt }],
      max_tokens: maxTokens,
      temperature: 0.7,
      stream: true
    });
 
    // Convert the response to a streaming text response
    const stream = OpenAIStream(response);
    return new StreamingTextResponse(stream);
  } catch (error) {
    console.error("Error in completion route:", error);
    return new Response(
      JSON.stringify({ error: "Failed to generate completion" }),
      {
        status: 500,
        headers: { "Content-Type": "application/json" }
      }
    );
  }
}
 
function constructPrompt(userPrompt: string, template: string) {
  // Inject the user prompt into the template. replaceAll fills every
  // {{PROMPT}} placeholder, not just the first occurrence.
  return template.replaceAll("{{PROMPT}}", userPrompt);
}

For the frontend, I created a simple editor component that would send prompts to the API and display the streamed response:

// src/components/editor/PromptEditor.tsx
"use client";
 
import { useState } from "react";
import { useCompletion } from "ai/react";
 
export function PromptEditor() {
  const [prompt, setPrompt] = useState("");
  const [template, setTemplate] = useState("blog-post");
 
  const { complete, completion, isLoading, error } = useCompletion({
    api: "/api/completion"
  });
 
  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    // complete() takes the prompt string; extra fields ride in the body option
    complete(prompt, { body: { template } });
  };
 
  return (
    <div className="space-y-4">
      <form onSubmit={handleSubmit} className="space-y-4">
        <div>
          <label htmlFor="template" className="block text-sm font-medium">
            Content Type
          </label>
          <select
            id="template"
            value={template}
            onChange={(e) => setTemplate(e.target.value)}
            className="mt-1 block w-full rounded-md border-gray-300 shadow-sm"
          >
            <option value="blog-post">Blog Post</option>
            <option value="social-media">Social Media Post</option>
            <option value="email">Email</option>
          </select>
        </div>
 
        <div>
          <label htmlFor="prompt" className="block text-sm font-medium">
            What would you like to write about?
          </label>
          <textarea
            id="prompt"
            value={prompt}
            onChange={(e) => setPrompt(e.target.value)}
            rows={4}
            className="mt-1 block w-full rounded-md border-gray-300 shadow-sm"
            placeholder="Describe your content here..."
          />
        </div>
 
        <button
          type="submit"
          disabled={isLoading || !prompt}
          className="inline-flex justify-center rounded-md border border-transparent bg-blue-600 px-4 py-2 text-sm font-medium text-white shadow-sm hover:bg-blue-700"
        >
          {isLoading ? "Generating..." : "Generate Content"}
        </button>
      </form>
 
      {error && (
        <div className="rounded-md bg-red-50 p-4">
          <div className="text-sm text-red-700">{error.message}</div>
        </div>
      )}
 
      {completion && (
        <div className="rounded-md border p-4 prose prose-sm max-w-none">
          {completion}
        </div>
      )}
    </div>
  );
}

I deployed this simple version to Vercel and shared it with the team. The excitement was palpable—we had a working AI writing assistant in just five days!

But my celebration was short-lived. I soon realized that this simple implementation had several critical flaws:

  1. The generated content wasn't being saved anywhere
  2. Users couldn't edit or refine the generated content
  3. We had no way to track usage or implement billing
  4. The UI was basic and not user-friendly

It was a start, but there was still so much to do.

Day 8: The Token Consumption Crisis

"We've already spent $120 on API calls? In three days?"

Our CEO was not happy. I had underestimated how quickly OpenAI API costs could add up, especially with the GPT-4 model we were using for better quality outputs.

I quickly implemented rate limiting and model switching:

// src/lib/rate-limit.ts
import { Redis } from "@upstash/redis";
import { Ratelimit } from "@upstash/ratelimit";
 
const redis = new Redis({
  url: process.env.UPSTASH_REDIS_URL!,
  token: process.env.UPSTASH_REDIS_TOKEN!
});
 
// Create a rate limiter that allows 50 requests per day per user
export const rateLimiter = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(50, "1 d")
});
 
// src/app/api/completion/route.ts (updated)
import { rateLimiter } from "@/lib/rate-limit";
 
export async function POST(req: Request) {
  try {
    // Verify authentication
    const session = await auth();
    if (!session) {
      return new Response("Unauthorized", { status: 401 });
    }
 
    // Apply rate limiting
    const userId = session.user.id;
    const { success, remaining } = await rateLimiter.limit(userId);
 
    if (!success) {
      return new Response(
        JSON.stringify({
          error: "Rate limit exceeded",
          remaining: 0
        }),
        {
          status: 429,
          headers: { "Content-Type": "application/json" }
        }
      );
    }
 
    // Parse the request body
    const {
      prompt,
      template,
      maxTokens = 500,
      model = "gpt-3.5-turbo" // Default to cheaper model
    } = await req.json();
 
    // Use GPT-4 only when a premium user explicitly requests it
    const actualModel =
      session.user.isPremium && model === "gpt-4" ? "gpt-4" : "gpt-3.5-turbo";
 
    // Rest of the function stays the same
    // ...
  } catch (error) {
    // Error handling
  }
}

I also implemented a token counting utility to estimate costs before making API calls:

// src/lib/token-counter.ts
import { encode } from "gpt-tokenizer";
 
export function countTokens(text: string): number {
  return encode(text).length;
}
 
export function estimateCost(
  tokens: number,
  model: "gpt-3.5-turbo" | "gpt-4"
): number {
  // Current pricing as of 2024
  const pricing = {
    "gpt-3.5-turbo": {
      input: 0.0000005, // $0.0005 per 1000 tokens
      output: 0.0000015 // $0.0015 per 1000 tokens
    },
    "gpt-4": {
      input: 0.00003, // $0.03 per 1000 tokens
      output: 0.00006 // $0.06 per 1000 tokens
    }
  };
 
  // Rough estimate assuming equal input/output
  const halfTokens = tokens / 2;
  const inputCost = halfTokens * pricing[model].input;
  const outputCost = halfTokens * pricing[model].output;
 
  return inputCost + outputCost;
}

These changes helped, but it was clear we needed a more sustainable approach to AI usage.
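The first step toward that sustainable approach was a pre-flight guard: estimate what a request will cost before sending it, and refuse anything over a ceiling. Here's a sketch of the idea — the ~4-characters-per-token heuristic and the $0.50 ceiling are illustrative assumptions, not our production numbers:

```typescript
// Pre-flight cost guard (sketch). The 4-chars-per-token heuristic and the
// default ceiling are illustrative assumptions, not production values.
type Model = "gpt-3.5-turbo" | "gpt-4";

const PRICE_PER_TOKEN: Record<Model, { input: number; output: number }> = {
  "gpt-3.5-turbo": { input: 0.0000005, output: 0.0000015 },
  "gpt-4": { input: 0.00003, output: 0.00006 }
};

// Cheap estimate: English text averages roughly 4 characters per token.
export function approxTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

export function estimateRequestCost(
  promptText: string,
  maxOutputTokens: number,
  model: Model
): number {
  const inputCost = approxTokens(promptText) * PRICE_PER_TOKEN[model].input;
  // Worst case: the model uses its entire output budget.
  const outputCost = maxOutputTokens * PRICE_PER_TOKEN[model].output;
  return inputCost + outputCost;
}

export function shouldAllowRequest(
  promptText: string,
  maxOutputTokens: number,
  model: Model,
  ceilingUsd = 0.5
): boolean {
  return estimateRequestCost(promptText, maxOutputTokens, model) <= ceilingUsd;
}
```

Calling `shouldAllowRequest` at the top of the completion route lets you reject runaway requests with a 4xx instead of discovering them on the invoice.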

Day 12: The Editor Breakthrough

After working through the token consumption issues, I focused on improving the editor experience. Users needed to be able to edit, format, and save the generated content.

I decided to implement a rich text editor using Lexical, Meta's framework for building rich text editors:

// src/components/editor/ContentEditor.tsx
"use client";
 
import { useState, useEffect } from "react";
import { LexicalComposer } from "@lexical/react/LexicalComposer";
import { RichTextPlugin } from "@lexical/react/RichTextPlugin";
import { ContentEditable } from "@lexical/react/ContentEditable";
import { HistoryPlugin } from "@lexical/react/HistoryPlugin";
import { AutoFocusPlugin } from "@lexical/react/AutoFocusPlugin";
import LexicalErrorBoundary from "@lexical/react/LexicalErrorBoundary";
import { HeadingNode, QuoteNode } from "@lexical/rich-text";
import { ListItemNode, ListNode } from "@lexical/list";
import { CodeHighlightNode, CodeNode } from "@lexical/code";
import { TableNode, TableCellNode, TableRowNode } from "@lexical/table";
import { ToolbarPlugin } from "./plugins/ToolbarPlugin";
import { ListPlugin } from "@lexical/react/ListPlugin";
import { MarkdownShortcutPlugin } from "@lexical/react/MarkdownShortcutPlugin";
import { TRANSFORMERS } from "@lexical/markdown";
 
import { useSaveDocument } from "@/hooks/useSaveDocument";
 
export function ContentEditor({
  initialContent,
  documentId
}: {
  initialContent: string;
  documentId: string;
}) {
  const { saveDocument, isSaving } = useSaveDocument(documentId);
 
  const initialConfig = {
    namespace: "MyEditor",
    theme: {
      // Theme configuration
    },
    onError(error: Error) {
      console.error(error);
    },
    nodes: [
      HeadingNode,
      ListNode,
      ListItemNode,
      QuoteNode,
      CodeNode,
      CodeHighlightNode,
      TableNode,
      TableCellNode,
      TableRowNode
    ]
  };
 
  return (
    <div className="content-editor">
      <LexicalComposer initialConfig={initialConfig}>
        <div className="editor-container">
          <ToolbarPlugin />
          <div className="editor-inner">
            <RichTextPlugin
              contentEditable={<ContentEditable className="editor-input" />}
              placeholder={
                <div className="editor-placeholder">Start writing...</div>
              }
              ErrorBoundary={LexicalErrorBoundary}
            />
            <HistoryPlugin />
            <AutoFocusPlugin />
            <ListPlugin />
            <MarkdownShortcutPlugin transformers={TRANSFORMERS} />
          </div>
        </div>
        <button
          onClick={() => saveDocument()}
          disabled={isSaving}
          className="save-button"
        >
          {isSaving ? "Saving..." : "Save"}
        </button>
      </LexicalComposer>
    </div>
  );
}

This editor gave users a familiar rich text editing experience while preserving the generated AI content's structure.
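One piece I've glossed over is the `useSaveDocument` hook imported above. Under the hood it boils down to a fetch call shaped roughly like this — the `PUT /api/documents/:id` endpoint and the payload fields are assumptions for illustration, not the exact shipped code:

```typescript
// Sketch of the request behind useSaveDocument. The PUT /api/documents/:id
// endpoint shape and payload fields are assumptions for illustration.
type SaveRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

export function buildSaveRequest(
  documentId: string,
  title: string,
  content: string
): SaveRequest {
  return {
    url: `/api/documents/${documentId}`,
    init: {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ title, content })
    }
  };
}

export async function saveDocument(
  documentId: string,
  title: string,
  content: string
): Promise<void> {
  const { url, init } = buildSaveRequest(documentId, title, content);
  const res = await fetch(url, init);
  if (!res.ok) {
    throw new Error(`Save failed with status ${res.status}`);
  }
}
```

Keeping the request-building pure makes it trivial to unit test without mocking `fetch`.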

Day 18: The Authentication and Database Challenge

With the core AI and editor functionality in place, I needed to implement proper authentication and document storage. I chose Supabase for its simplicity and powerful features:

// src/lib/supabase.ts
import { createClient } from "@supabase/supabase-js";
 
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
 
// src/app/api/documents/route.ts
import { supabase } from "@/lib/supabase";
import { auth } from "@/lib/auth";
 
export async function POST(req: Request) {
  try {
    const session = await auth();
    if (!session) {
      return new Response("Unauthorized", { status: 401 });
    }
 
    const { title, content, contentType } = await req.json();
 
    const { data, error } = await supabase
      .from("documents")
      .insert({
        user_id: session.user.id,
        title,
        content,
        content_type: contentType
      })
      .select()
      .single();
 
    if (error) throw error;
 
    return new Response(JSON.stringify(data), {
      headers: { "Content-Type": "application/json" }
    });
  } catch (error) {
    console.error("Error creating document:", error);
    return new Response(
      JSON.stringify({ error: "Failed to create document" }),
      {
        status: 500,
        headers: { "Content-Type": "application/json" }
      }
    );
  }
}
 
export async function GET(req: Request) {
  try {
    const session = await auth();
    if (!session) {
      return new Response("Unauthorized", { status: 401 });
    }
 
    const url = new URL(req.url);
    const limit = parseInt(url.searchParams.get("limit") || "10");
    const page = parseInt(url.searchParams.get("page") || "1");
    const offset = (page - 1) * limit;
 
    const { data, error, count } = await supabase
      .from("documents")
      .select("*", { count: "exact" })
      .eq("user_id", session.user.id)
      .order("created_at", { ascending: false })
      .range(offset, offset + limit - 1);
 
    if (error) throw error;
 
    return new Response(
      JSON.stringify({
        documents: data,
        total: count,
        page,
        limit
      }),
      {
        headers: { "Content-Type": "application/json" }
      }
    );
  } catch (error) {
    console.error("Error fetching documents:", error);
    return new Response(
      JSON.stringify({ error: "Failed to fetch documents" }),
      {
        status: 500,
        headers: { "Content-Type": "application/json" }
      }
    );
  }
}

I also created a dashboard to display all of a user's documents:

// src/app/dashboard/page.tsx
import { auth } from "@/lib/auth";
import { supabase } from "@/lib/supabase";
import { DocumentCard } from "@/components/DocumentCard";
import { CreateDocumentButton } from "@/components/CreateDocumentButton";
 
export default async function DashboardPage() {
  const session = await auth();
 
  if (!session) {
    redirect("/auth/sign-in");
  }
 
  const { data: documents } = await supabase
    .from("documents")
    .select("*")
    .eq("user_id", session.user.id)
    .order("updated_at", { ascending: false })
    .limit(10);
 
  return (
    <div className="container mx-auto py-8">
      <div className="flex justify-between items-center mb-8">
        <h1 className="text-2xl font-bold">Your Documents</h1>
        <CreateDocumentButton />
      </div>
 
      <div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6">
        {documents?.length ? (
          documents.map((doc) => <DocumentCard key={doc.id} document={doc} />)
        ) : (
          <div className="col-span-full text-center py-12">
            <p className="text-gray-500 mb-4">
              You haven't created any documents yet.
            </p>
            <CreateDocumentButton />
          </div>
        )}
      </div>
    </div>
  );
}

Day 25: The Prompt Engineering Struggle

With the technical infrastructure in place, I faced an unexpected challenge: creating effective prompts for different types of content. The quality of the AI-generated text varied wildly depending on the prompt.

I spent several days researching prompt engineering techniques and experimenting with different approaches. Eventually, I created a library of prompt templates for different content types:

// src/lib/prompt-templates.ts
export const promptTemplates = {
  "blog-post": `
    You are an expert content writer creating a blog post.
    
    Topic: {{PROMPT}}
    
    Write a comprehensive blog post about this topic. Include:
    - An engaging introduction
    - 3-5 main sections with headings
    - Practical insights and actionable advice
    - A conclusion that summarizes key points
    
    Use a conversational, informative tone. The post should be 800-1200 words.
    Do not include placeholder text or notes to the editor.
  `.trim(),
 
  "social-media": `
    You are a social media expert crafting engaging posts.
    
    Topic: {{PROMPT}}
    
    Create 5 different social media posts about this topic, each 280 characters or less.
    Each post should have:
    - Engaging hook
    - Clear message
    - Call to action
    - Relevant hashtags (2-3 per post)
    
    Format each post separately with "Post 1:", "Post 2:", etc.
  `.trim(),
 
  "email-newsletter": `
    You are an email marketing specialist writing a newsletter.
    
    Topic: {{PROMPT}}
    
    Write an email newsletter about this topic that includes:
    - An attention-grabbing subject line (marked as "Subject:")
    - A personal greeting
    - An engaging introduction paragraph
    - Main content with 2-3 key points or updates
    - A clear call-to-action
    - A friendly sign-off
    
    The tone should be professional but conversational. Length should be around 300-500 words.
  `.trim()
 
  // Additional templates for other content types
};

I also created a prompt customization interface that allowed users to tweak the templates to their liking:

// src/components/PromptCustomizer.tsx
"use client";
 
import { useState } from "react";
import { promptTemplates } from "@/lib/prompt-templates";
 
export function PromptCustomizer({
  contentType,
  onCustomizePrompt
}: {
  contentType: keyof typeof promptTemplates;
  onCustomizePrompt: (customPrompt: string) => void;
}) {
  const [customPrompt, setCustomPrompt] = useState(
    promptTemplates[contentType]
  );
 
  return (
    <div className="space-y-4">
      <h3 className="text-lg font-medium">Customize AI Instructions</h3>
      <p className="text-sm text-gray-500">
        Modify the instructions to get different results from the AI. Use{" "}
        {{ PROMPT }} to reference your main topic.
      </p>
 
      <textarea
        value={customPrompt}
        onChange={(e) => setCustomPrompt(e.target.value)}
        rows={10}
        className="w-full rounded-md border-gray-300 shadow-sm"
      />
 
      <div className="flex justify-end">
        <button
          onClick={() => onCustomizePrompt(customPrompt)}
          className="px-4 py-2 bg-blue-600 text-white rounded-md"
        >
          Save Custom Instructions
        </button>
      </div>
    </div>
  );
}

This was a game-changer for the quality of our generated content. Users could now customize the AI's instructions to match their specific writing style and content needs.
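One small guard proved useful here: if a user edits the placeholder out of their custom template, the AI never sees their topic at all. A sketch of the resolution logic — the validity check and the abbreviated default templates are mine for illustration, not verbatim from our codebase:

```typescript
// Pick the instructions to send: the user's custom template if it still
// contains the {{PROMPT}} placeholder, otherwise the stock template.
// The defaults below are abbreviated stand-ins for the real templates.
const defaultTemplates: Record<string, string> = {
  "blog-post":
    "You are an expert content writer creating a blog post.\nTopic: {{PROMPT}}",
  "social-media":
    "You are a social media expert crafting engaging posts.\nTopic: {{PROMPT}}"
};

export function resolveTemplate(
  contentType: string,
  customTemplate?: string
): string {
  // A custom template without the placeholder would silently drop the
  // user's topic, so treat it as invalid and fall back.
  if (customTemplate && customTemplate.includes("{{PROMPT}}")) {
    return customTemplate;
  }
  return defaultTemplates[contentType] ?? "{{PROMPT}}";
}
```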

Day 30: Launch Day and Lessons Learned

After a month of intense development, our AI writing assistant was ready for its soft launch. We called it "DraftGenius" and released it to a small group of beta testers.

The feedback was encouraging, and the beta testers also gave us valuable criticism to act on.

As I reflected on the 30-day journey, I realized I had learned several important lessons:

  1. Start with a clear architecture: Having a solid plan from the beginning saved me countless hours of refactoring.

  2. Manage API costs carefully: AI APIs can get expensive quickly. Implement rate limiting and usage tracking from day one.

  3. Focus on the user experience: The quality of the AI is important, but the overall user experience is equally crucial.

  4. Prompt engineering is an art: Creating effective prompts requires experimentation, research, and continuous refinement.

  5. Build incrementally: Starting with a minimal viable product and adding features gradually allowed for faster iteration and feedback.

Our CEO was pleased with the result, and we decided to move forward with DraftGenius as our new flagship product. Six months later, it would become our main source of revenue and secure our next funding round.

The Technical Stack in Review

For those interested in the technical details, here's a summary of what we used to build DraftGenius:

  1. Next.js 14 (App Router) with Vercel's AI SDK for streaming responses
  2. Tailwind CSS for styling
  3. Lexical for the rich text editor
  4. OpenAI API (GPT-3.5 Turbo by default, GPT-4 for premium users)
  5. Supabase for authentication and document storage
  6. Upstash Redis for rate limiting
  7. Stripe for payments
  8. Vercel for hosting

Final Thoughts

Building an AI-powered application in 30 days was an ambitious challenge, but it taught me that combining modern web technologies like Next.js with AI APIs can yield powerful results in a short time.

If you're considering building your own AI-powered application, my advice is to start simple, focus on the user experience, and be prepared to iterate quickly based on feedback.

The future of web development is increasingly intertwined with AI, and frameworks like Next.js make it possible for developers to build sophisticated AI applications without reinventing the wheel.

As for DraftGenius? It's still growing, evolving, and helping content creators overcome writer's block one draft at a time.