Thursday, February 19, 2026 · 5 min read

Why I Built the Infrastructure Before Writing a Single Post

What Was Built

I built a production blog in one day using an AI engine that takes content from conversation to live post in under 90 seconds without any manual git commits.

Business Insight

While many businesses are simply bolting AI onto their existing workflows, the real compounding advantage belongs to systems where AI is the fundamental infrastructure rather than a mere assistant. Currently, very few teams are building with that foundational approach.

Friction

Tailwind v4 removed the config file entirely, Next.js 15 made route params async, and React Strict Mode double-fired the agent on load. All three were discovered by shipping, not by reading docs.

ai-native · architecture · publishing-pipeline · systems-design

The System Ships Before the Content Does

Most people start a blog by writing. I started by building the publishing engine.

That decision reflects a specific belief: if the infrastructure is right, the content compounds. If it isn't, every post becomes manual overhead that quietly kills the habit. For a business adopting AI, the same principle applies at a much larger scale.

This is day one of a 90-day pivot into AI-native systems development. By the end of it, I want to be the person companies call when they're ready to stop using AI as a productivity tool and start treating it as infrastructure.

This post documents how the first system was built, what it revealed, and why the architectural decision made on day one matters more than it looks.

What I Built

The blog runs on Next.js 15 with the App Router, MDX content, and Tailwind v4. No configuration files. Theme tokens live directly in globals.css. The design is intentionally austere — a dark monospaced interface with a 90-day progress tracker that updates automatically as posts are published.

Content lives in content/posts/ as MDX files with enforced frontmatter schema. Every post must define three fields: what was built, what was learned, and what blocked progress. These are not optional. If those three questions are not answered, the post does not publish. Structure is the discipline.
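A minimal sketch of that gate — the field names (`built`, `learned`, `blocked`) are assumptions for illustration, not the actual schema:

```typescript
// Sketch of the three-field frontmatter gate. If any required field
// is missing or empty, the post does not publish.
type Frontmatter = {
  title?: string;
  built?: string;    // what was built
  learned?: string;  // what was learned
  blocked?: string;  // what blocked progress
};

const REQUIRED_FIELDS = ["built", "learned", "blocked"] as const;

function missingFields(fm: Frontmatter): string[] {
  // A field counts as answered only if it is a non-empty string.
  return REQUIRED_FIELDS.filter(
    (key) => typeof fm[key] !== "string" || fm[key]!.trim() === ""
  );
}

function assertPublishable(fm: Frontmatter): void {
  const missing = missingFields(fm);
  if (missing.length > 0) {
    // Refuse to publish rather than silently accepting a hollow post.
    throw new Error(`Post is missing required fields: ${missing.join(", ")}`);
  }
}
```

The point is that the check throws rather than warns: the discipline lives in the pipeline, not in the author's memory.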

The admin interface is where the AI layer sits. Instead of a CMS form, it is a conversation. A Groq-powered agent running llama-3.3-70b-versatile asks about what was built today. I respond naturally — by voice or text. The agent extracts structured metadata, writes the post in a defined format, and sends me to a preview page. On approval, the system calls the GitHub API directly, commits the MDX file, and Vercel deploys automatically. A 60-second countdown gates the live link so the deployment finishes before the link appears.

From conversation to live post in under 90 seconds. No git commands. No CMS login. No copy-pasting between tools.
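That commit step is a single call to the GitHub contents API. A sketch of building the request — the owner, repo, and branch names here are placeholders, not the real values:

```typescript
// Sketch of the commit step: one PUT to the GitHub contents API creates
// the MDX file, and Vercel deploys the resulting commit automatically.
function buildCommitRequest(opts: {
  owner: string;
  repo: string;
  path: string;    // e.g. "content/posts/my-post.mdx"
  mdx: string;     // full file contents
  message: string;
  branch: string;
}) {
  return {
    url: `https://api.github.com/repos/${opts.owner}/${opts.repo}/contents/${opts.path}`,
    method: "PUT" as const,
    body: {
      message: opts.message,
      branch: opts.branch,
      // The contents API requires the file body base64-encoded.
      content: Buffer.from(opts.mdx, "utf8").toString("base64"),
    },
  };
}
```

Sending it is one `fetch` with an `Authorization: Bearer <token>` header. No local clone, no git CLI anywhere in the loop.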

Authentication is enforced at the edge using Next.js Proxy. An httpOnly session cookie is set server-side at login and verified before any admin route renders. Credentials never reach the client.
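The post does not show the verification code; one plausible shape, assuming the session cookie is an HMAC-signed value (the real implementation may differ):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of session-cookie verification, assuming the login route sets an
// httpOnly cookie of the form "<payload>.<hmac>" signed server-side.
// SESSION_SECRET is a placeholder env var name.
const SECRET = process.env.SESSION_SECRET ?? "dev-only-secret";

function sign(payload: string): string {
  const mac = createHmac("sha256", SECRET).update(payload).digest("hex");
  return `${payload}.${mac}`;
}

function verify(cookie: string): boolean {
  const dot = cookie.lastIndexOf(".");
  if (dot < 0) return false;
  const payload = cookie.slice(0, dot);
  const mac = Buffer.from(cookie.slice(dot + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(payload).digest();
  // Constant-time comparison avoids leaking the MAC via timing.
  return mac.length === expected.length && timingSafeEqual(mac, expected);
}
```

Because the check only needs the cookie and the secret, it can run before any admin route renders, and nothing secret ever ships to the client.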

A concepts index generates automatically from post tags. Every tag becomes a navigable topic. By day 90, it will form a complete knowledge map of the pivot. No manual curation. Structure compounding over time.
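The index itself is a straightforward inversion of the frontmatter — roughly:

```typescript
// Sketch of the concepts index: invert posts-by-tag into tag-by-posts.
// Every tag becomes a topic page; no manual curation step exists.
type Post = { slug: string; tags: string[] };

function buildConceptsIndex(posts: Post[]): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const post of posts) {
    for (const tag of post.tags) {
      const slugs = index.get(tag) ?? [];
      slugs.push(post.slug);
      index.set(tag, slugs);
    }
  }
  return index;
}
```

Each new post makes the map denser for free, which is the compounding part.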

Key Concept

There is a distinction between AI-assisted and AI-native that most teams are not yet drawing clearly.

AI-assisted is powerful. It makes individuals faster. You open a tool, paste something in, copy something out. The AI sits beside the workflow. You are still driving.

AI-native is different. The AI stops being a tool you reach for and becomes part of the system itself. Inputs go in. Structured outputs come out. The model is infrastructure, not an accessory.

The publishing pipeline built today is AI-native. I do not write posts and ask AI to polish them. I speak to an agent, and a structured post emerges through a pipeline. Swapping the underlying model is one line of configuration. The rest of the system does not care which LLM is running underneath it.

That isolation is intentional. It is also the first real architectural principle of this pivot: treat AI as a replaceable layer inside a larger, well-designed system.
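The replaceable-layer boundary is small enough to show in full — a sketch, with the env var name assumed for illustration:

```typescript
// Sketch of the model boundary: the rest of the system imports the
// resolved model name and never hard-codes a provider. Swapping models
// is a config change, not a code change. AGENT_MODEL is an assumed name.
function resolveModel(env: Record<string, string | undefined>): string {
  return env.AGENT_MODEL ?? "llama-3.3-70b-versatile";
}
```

Everything downstream depends on the contract (structured metadata in, structured post out), not on which model honours it this month.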

Business Implication

Most companies adopting AI right now are in the assisted phase. They are adding AI tools to existing workflows — Copilot in the IDE, ChatGPT in the browser, a summarisation tool bolted onto the CRM. Each tool makes individuals faster. None of them change the underlying system.

The compounding advantage goes to organisations that make AI structural. When the model is a replaceable layer and the surrounding system is well-designed, you gain three things most AI-assisted teams don't have: auditability, replaceability, and scale. You can swap models as better ones emerge. You can audit what the system produced and why. You can scale the capability without scaling the headcount.

The teams building this way right now are building a structural lead that will be very difficult to close in two or three years.

Risk Pattern

The hidden risk for businesses in the assisted phase is that they are accumulating invisible dependencies. Every workflow that quietly relies on a specific AI tool, a specific model behaviour, or a specific output format is a liability that hasn't been named yet. When the tool changes pricing, when the model updates and behaves differently, when the vendor gets acquired — the dependency surfaces all at once.

Organisations that treat AI as infrastructure define the interface explicitly. They know exactly what goes in and what comes out. They are not dependent on a specific model. They are dependent on a contract. That is a fundamentally different risk profile, and very few teams are managing it that way yet.

The Blockers

Tailwind v4 removed tailwind.config.ts entirely. Custom tokens now live in globals.css using @theme {}. The typography plugin loads via @plugin. This is cleaner once understood, but the mental model shift costs time if you are expecting v3 behaviour.
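For anyone making the same shift, the v4 shape looks roughly like this in globals.css (the token names are illustrative, not the blog's actual values):

```css
@import "tailwindcss";
@plugin "@tailwindcss/typography";

@theme {
  /* Custom tokens replace the old tailwind.config.ts theme block. */
  --color-ink: #e6e6e6;
  --font-mono: "JetBrains Mono", monospace;
}
```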

Next.js 15 made dynamic route params async. Accessing params.slug directly breaks with a runtime error. The fix is const { slug } = await params. One line. It took a 404 to find it.
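The before-and-after, typed standalone so the shape is visible (in the app this lives in the `[slug]` page component):

```typescript
// Sketch of the Next.js 15 fix: params is now a Promise and must be awaited.
type Params = Promise<{ slug: string }>;

async function getSlug({ params }: { params: Params }): Promise<string> {
  // const slug = params.slug;     // v14 habit -- breaks in Next.js 15
  const { slug } = await params;   // the one-line fix
  return slug;
}
```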

React Strict Mode caused the admin agent to fire its opening question twice on load. React mounts components twice in development intentionally. A useRef boolean guard prevents the initialisation effect from running more than once regardless of how many times React mounts the component.
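The guard logic, sketched outside React so it is testable on its own (in the component the ref comes from `useRef(false)` and the check sits inside the effect):

```typescript
// Sketch of the Strict Mode guard. In the component, didInit is
// useRef(false); the init callback runs only when the flag first flips.
type Ref<T> = { current: T };

function runOnce(didInit: Ref<boolean>, init: () => void): void {
  if (didInit.current) return; // the second Strict Mode mount lands here
  didInit.current = true;
  init();
}
```

In the effect that would be `useEffect(() => runOnce(didInit, askOpeningQuestion), [])`, with `askOpeningQuestion` standing in for whatever kicks off the agent's first message.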

Small problems. Real friction. Exactly the kind you only find by shipping.
