PureSEM Blog

AI content doesn't need better prompts. It needs better context.

Written by Keith Holloway | May 14, 2026 2:31:08 PM

I was hitting a crisis moment every other day.

Before client meetings, I spent a frantic hour scrambling to pull everything together so I wouldn't get caught off guard. How's performance? Did that tracking problem get fixed? What's the content plan? What was done on the technical side? Where are we on the roadmap?

It was too much, and it had to stop.

That might sound like a management problem. I saw it as a systems problem.

PureSEM's professional services do everything on the search and inbound side for clients:

Content development, content performance, organic and AI search performance, internal linking, website optimization, paid search management, and full-funnel attribution.

Every week, before a meeting, the lead on each account had to gather work spread across team members and systems.

ClickUp for project management. Our own platform for Google Search Console, GA4, Google Ads, LinkedIn, connected CRM data, our content calendar, the inbound links work, and then SEMrush for rank tracking. The work itself was good. Pulling it together into a coherent client conversation, in under an hour, every time, was brutal.

Sure, we could hire dedicated account managers like most agencies, but that would just add overhead to delivery and create a layer of friction between clients and experts, which clients naturally dislike. We considered it. We wanted to avoid it.

That frustration pushed us to see if we could connect everything into Claude and make it a real account manager. That's a story for another post.

But it opened up something I'd been circling around for a long time: the problem of AI-generated content that sounds like AI-generated content.

Those two problems converged. 

 

Introducing VERA

The content problem is what this post is about. And it's everywhere right now.

Tools like Claude and ChatGPT can write fluently today. Grammatically clean, structurally coherent.

The problem is that without context, the writing has nothing to say that hasn't already been said.

Ask AI to write about expense management software with no context, and it writes a version of every other article ever published about expense management software.

That's not a quality problem; it's a math problem.

Generic input produces generic output.

I saw this in person at SEO IRL in Toronto in the fall of 2025. SEO experts were showing off tools that could pick any topic, generate subtopics, pull keywords, search for every keyword, scrape the top 10 results, extract those articles, build a combined outline based on what the top 10 said, and write a new article from that outline. Then publish it.

That's what people were doing manually ten years ago, and it was producing fantastic results. Now it's being fully automated.

Where do we think that is going to end up? What value is that adding?

It's just regurgitating everything that already exists.

Google has been battling this for years, and in March 2026, it released the strongest update yet to specifically target content that adds nothing new to the conversation. The AI mills are running into the wall they built.

The writing quality is not the problem. It never was.

 

Where this actually started: a keyword categorization problem from five years ago

Before we ever thought about VERA, we had a different problem.

If you're managing SEO for a busy website, it might be ranking for 10,000 to 50,000 keywords. And the keywords grow by 5-10% every month.

You can't manage performance at the keyword level. You have to group them into clusters that matter to the business.

To do that grouping accurately, you need AI. But early AI was terrible at this, because it didn't know anything about the brand. When a case study page drove traffic using a customer's company name, the AI had no way to tell whether that name was a customer, a partner, a competitor, or a product integration. It would categorize it wrong.

So we built a system we called BICA: brand intelligence and competitive analysis. We had to teach the AI what the brand was. Its products. The common ways people referred to it. The leaders and key staff. What it's integrated with. Who its partners were. Who its customers were, because customer names were all over the website in case studies.

We had to know whether a company like Microsoft, appearing in a client's content, was a competitor, an integration partner, or a customer.

The AI needed to be taught these distinctions before it could accurately categorize anything.

In building what is now our intent-analytics system, which categorizes keywords, impressions, traffic, conversions, and leads across the entire funnel, we learned that accuracy comes from building layer upon layer of context. Not just better prompts. Layers of prompts, tables, and data, each verified before it becomes input to the next layer. This is how you eliminate errors and create repeatable, accurate outcomes.
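The layering idea can be sketched in a few lines of Python. This is a hypothetical illustration, not PureSEM's actual schema: the brand-context categories, company names, and cluster labels are all invented, but the shape is the point — the verified brand-intelligence layer becomes the input that lets the next layer resolve an otherwise ambiguous name.

```python
# Hypothetical sketch of layered context: each layer's verified output
# becomes the input to the layer above it. All names are illustrative.

# Layer 1: brand intelligence (the BICA-style foundation)
brand_context = {
    "products": {"ExpenseTrackerPro"},
    "customers": {"Acme Corp"},
    "partners": {"Microsoft"},       # integration partner, not competitor
    "competitors": {"Spendwise"},
}

def classify_entity(name, context):
    """Layer 2: resolve an ambiguous company name using the verified
    brand context from the layer below."""
    for role, names in context.items():
        if name in names:
            return role
    return "unknown"

def categorize_keyword(keyword, context):
    """Layer 3: group a keyword into a business-relevant cluster by
    using the entity classification rather than guessing."""
    for word in keyword.split():
        role = classify_entity(word.title(), context)
        if role == "customers":
            return "case-study"
        if role == "competitors":
            return "competitive"
        if role == "partners":
            return "integration"
    return "generic"

print(categorize_keyword("microsoft integration setup", brand_context))
# -> "integration": without the brand layer, "Microsoft" is ambiguous
```

Without the brand-context layer, "microsoft" in a keyword could mean competitor, partner, or customer; with it, the classification is deterministic and repeatable.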

Then we realized something that changed everything we were doing.

The same context-layering that made keyword categorization work also made content writing work. If teaching the AI about the brand fixed categorization, then teaching it everything we knew about the brand, the market, the customer, and the strategy would fix writing, too.

The pyramid wasn't designed. It was discovered, layer by layer, every time we ran into what AI couldn't do without being told.

 

What a good writer actually does

When I think about what makes a real writer good at this, it clarifies the problem fast.

A good writer understands the product and the customer.

They understand the markets.

They know who they're talking to.

If they're doing it right, they've interviewed subject matter experts. They've sat with the people inside the company who actually know how the product works, who actually talk to customers every day, who actually understand the nuances of the market. That original human input is not optional. It's the difference between writing something only this company could have written and writing something any company could have written.

They have a messaging strategy. They have a content plan. They know which keywords they're targeting.

And they bring years of voice and craft. Content frameworks built from thousands of edits. An instinct for what a specific person at a specific point in a buying process needs to hear to move.

But that alone does not get inbound SEO results. In the ideal environment, this expert writer is working with an experienced SEO architect who provides the knowledge of the site's technical structure: where the internal links need to go, which hub pages need to be built up, which existing pages are competing against each other.

Most writers don't have all of that. Most agencies don't either. Most AI tools don't even try to have it.

The pyramid is what you get when you try.

 

The six layers

A generalization of the context required to develop great content.  

Intelligence is the foundation.

This is where BICA lives: the Brand Intelligence and Competitive Analysis system we built to teach our AI about the company's entire ecosystem to power our intent-based analytics software. This is also the foundational knowledge before any content is written. Products, people, customers, partners, competitors, integrations. All of it. Without this layer, every layer above it produces content that could have been written for anyone.

It's also where market selection, persona research, and Subject Matter Expert (SME) interviews live. Real personas built from buyer pain and outcome data. Transcripts from SME interviews to capture the things that only the company actually knows:

The observations, the war stories, the specifics that can never appear in a generic AI article because they don't exist anywhere else. This is where original human input enters the system. Without it, everything above this layer is reorganizing what's already on the internet.

Positioning sits on top of that.

Product messaging has to be documented and clear before a single brief is written. The entity strategy has to be defined. Most companies have no deliberate plan for how AI search engines should understand them, and the result is that they get classified generically and have no chance of ranking for anything competitive. Strategic topics and subtopics are deliberately chosen here based on where buyer intent actually lives.

The Plan is the content strategy: which topics get covered, at what funnel stage, for which persona, targeting which keyword clusters.

This is where most teams break. They open a blank page and ask, "What should we write about today?" Whatever comes to mind goes on the calendar. Six months later, they've written the same article four different ways without realizing it.

We don't do that. Our content strategy is modular. Every topic has subtopics. Every subtopic gets crossed with personas. Every persona has internal struggles, external problems, and philosophical questions at different stages of the buying cycle, from not-yet-aware to actively evaluating. That matrix produces an almost infinite number of legitimate angles to write from, every one of which is grounded in something a real buyer is actually trying to solve.

The modular content strategy ensures every piece of content is unique, targeted to a single person with a specific problem at a specific time in their journey. It's also how everything fits together into a cohesive strategy that supports the entity strategy.
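The matrix described above is just a Cartesian product. Here's a minimal sketch; the topics, personas, problem types, and stages are invented for illustration, but the arithmetic shows why even a small strategy produces a large number of legitimate angles.

```python
# A minimal sketch of the modular content matrix. All values are
# illustrative, not a real client strategy.
from itertools import product

topics = {"expense management": ["receipt capture", "approval workflows"]}
personas = ["finance leader", "office manager"]
problems = ["internal struggle", "external problem", "philosophical question"]
stages = ["unaware", "problem-aware", "evaluating"]

angles = [
    (topic, subtopic, persona, problem, stage)
    for topic, subtopics in topics.items()
    for subtopic, persona, problem, stage in product(
        subtopics, personas, problems, stages
    )
]

# 2 subtopics x 2 personas x 3 problem types x 3 stages = 36 angles
print(len(angles))
```

One topic with two subtopics already yields 36 distinct angles, each grounded in a specific person with a specific problem at a specific stage. Scale the topics and personas up and the calendar never runs dry.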

Every piece of content gets a keyword target assigned before it's briefed, not after. Most teams reverse this. The content calendar reflects the strategy, not a list of ideas someone came up with on a Tuesday.

Voice and craft are what turn layered context into great pieces of writing that sound like the company and are interesting to read.

The prompt is not the craft. Prompts live in every layer of this pyramid. There are prompts at the intelligence layer, prompts at the positioning layer, and prompts inside the planning layer. Each one builds on what the layer below has produced.

The pyramid metaphor is intentional: if any brick is missing or weak, the structure above it doesn't stand.

What this layer adds is the targeting and the human craft.

Brand guidelines. Client voice profiles built from real writing examples. Content frameworks developed over years of B2B work. And our dictionary of phrases and patterns to avoid, because they make content look like it was produced by a machine.

When all of this is built up correctly, you take the original human input from the intelligence layer, and you target it at a very specific person, with a very specific problem, at a very specific moment in their buying process. That's when content resonates. When it feels emotional. When someone reads it and thinks the writer was inside their head. That's what good content does, and it's only possible because of everything underneath it.

A great prompt sitting atop nothing produces fluent mediocrity.

The same prompt, sitting atop a fully built intelligence and positioning layer, produces something completely different.

Structure is internal link mapping, SEO architecture, and optimized metadata.

This is where most content operations get sloppy, and it's where the PureSEM software does the heaviest lifting in our system.

Our software has crawled the client's entire website. It has all the existing internal links and content performance. It knows the full keyword universe, both the current state and the ideal state.

It knows the strategic topics, the primary keywords inside each topic, and the targeted keywords on every single page of the site.

When VERA writes an article, it has the entire database and strategy available in real time to identify where internal links should go and which pages those links should support.

SEO architecture is baked in. Optimized metadata draws from the same knowledge of the keyword universe. The Content Hub Manager, which we recently released, embeds the entire SEO strategy from BICA through topics through primary keywords through the content hub itself.

What used to live inside our professional services brains is now part of the software. VERA can generate everything from top to bottom, with human input still going in at the intelligence layer and at final review.

A typical writer doesn't know about any of this. They have to work with an SEO professional who provides the strategy, the keyword universe, the link plan. When you build an AI system the way we have, you can bake the SEO strategy, the SEO data, and the link building structure right into the content plan itself.

Published is the content itself and all the social remixes. 

Nothing gets published without human review.

That's not negotiable, and it never will be. But the nature of that review has shifted in the last twelve months. We used to send a draft to a client and spend days going back and forth on revisions. Now, increasingly, we send a draft over, and the client's expert reads it and tells us it's the best content they've seen on the topic. The clients themselves are often surprised at how little they need to change.

Then there's image generation, metadata, social adaptation. All of it being compressed by additional systems sitting on top of the same context layers.

 

The flywheel: what keeps it producing

The pyramid describes the foundation. The flywheel is what runs across it continuously.

In our app, content is being scored and tiered automatically. Internal links are mapped. Technical issues are surfaced, tracked, and monitored to completion, then mapped against pages for ranking correlations.

We're watching which topics are gaining or losing visibility in AI search. We're tracking what's being cited. We're identifying new citation opportunities as they appear.

And we're refreshing content, which is arguably the largest content opportunity most B2B sites are sitting on.

Many of our clients have ten or twenty years of accumulated blog posts. Most of it is out of date. Some of it is actively misleading. All of it is eating Google's crawl budget and confusing AI systems trying to understand what the company is about today.

Our software looks at every piece of existing content, finds its last refresh date, evaluates whether it still fits the current strategy, and runs a decode process that identifies what should be refreshed, what should be redirected, and what should be deleted.

The goal: every page on the site reviewed in the last year, every page on-strategy, every page properly internally linked, every page actually useful to the customer.
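The decode process described above can be sketched as a simple triage function. This is a hypothetical simplification, assuming a page record with a last-refresh date, a strategy-fit flag, and traffic; the real evaluation is richer, but the decision shape is the same.

```python
# Hypothetical sketch of the refresh/redirect/delete decode triage.
# Field names and thresholds are illustrative assumptions.
from datetime import date, timedelta

def decode(page, today=date(2026, 5, 14)):
    """Classify an existing page: keep, refresh, redirect, or delete."""
    stale = today - page["last_refresh"] > timedelta(days=365)
    if not page["on_strategy"]:
        # Off-strategy but still earning traffic: redirect it to a
        # relevant on-strategy page instead of losing the equity.
        return "redirect" if page["monthly_visits"] > 0 else "delete"
    return "refresh" if stale else "keep"

pages = [
    {"url": "/blog/old-guide", "last_refresh": date(2021, 3, 1),
     "on_strategy": True, "monthly_visits": 120},
    {"url": "/blog/off-topic", "last_refresh": date(2019, 6, 1),
     "on_strategy": False, "monthly_visits": 40},
    {"url": "/blog/dead-end", "last_refresh": date(2018, 1, 1),
     "on_strategy": False, "monthly_visits": 0},
]
for p in pages:
    print(p["url"], "->", decode(p))
```

Run over a ten-year archive, a rule like this turns "thousands of old posts" into a short, actionable worklist.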

This is where most AI tools fall apart, and it's worth being direct about it.

There's a whole category of platforms right now that look impressive in a demo but stop at the dashboard.

They tell you what's wrong, but they can't ground their recommendations in your actual data, your actual content, your actual site structure, your actual strategy.

Generic AI input produces generic recommendations. A tool that doesn't know your keyword universe, your entity strategy, your customer pain points, and your full content inventory cannot give you advice that's worth acting on. It can only give you advice that looks credible.

The flywheel only works when the system is continuously grounded in your data. That's not a feature. It's the entire game.

 

What we're still figuring out

Here's what's not solved.

Context storage is messy.

In our earlier iterations, client-specific context — personas, entity strategies, market intelligence, all of it — lived as Markdown files uploaded to Claude projects. When something changed, a new file was generated, uploaded to the right project, and someone had to remember to delete the old one. Across a roster of clients, that was too much mental load. We had spreadsheets to track which clients had which pieces of the pyramid uploaded and what version numbers. It was getting out of control.

That's not a system.

The solution was obvious: we already had a complete multi-tenant architecture in Google Cloud powering the PureSEM platform. We're now building out the full context store within it, so every client has versioned personas, brand guidelines, customer voice profiles, and entity strategies in structured tables, always current, always the version VERA reads, with fast retrieval. It's being built now.

The friction tax is steep.

Producing a final piece of content still involves moving files between VERA, Google Docs, our clients' folders, and whatever CMS the client is on. Each hop is small. Together they eat hours. Copying Markdown into Google Docs, sharing links, making edits, preparing images, loading into CMSs, adapting for LinkedIn, and handling metadata. We're working on reducing it. We're not done.

The self-improvement loops are still in prototype, but there are several of them.

We're building toward a system that continuously improves.

The first is our AI visibility tracking, which runs daily and analyzes citations and query fanouts across AI search engines to identify exactly where a brand is showing up and where it isn't. The content opportunities are too numerous to count. Sure, we could add them all to the content calendar, but it would explode the strategy.

The second is the set of automated weekly performance reports we already run, for Google Analytics, Google Search Console data, the AI search visibility data, and the system we built for rank tracking that will replace SEMrush. VERA is already providing automated analysis of every channel, every week, using the same context systems it uses for content. We're building another layer of analysis on top of each channel analysis to tie it all together for the CMO.

The third is the daily meditation agent, an idea we owe to Dave Shanley at ContentCamel.io. It runs every night, reflects on everything it did that day, scores how well it did it, and sends a morning summary. It's exactly what the best version of VERA should do across every client. There's still a lot to figure out here.

The fourth, and the one we're building toward now, is feeding all of this back into the system as recommendations. The challenge at this stage is not generating recommendations. The challenge is triaging them. Without good triage, the system overwhelms the operator with too much to act on.

The next phase is making sure only the very best recommendations surface, in priority order, in the workflow where they can actually be acted on.

What we're really building, beyond content generation, is a fully automated, self-improving AI search system.

There is so much more to do.

 

Why we're writing about it now

This is not an announcement of a finished product. It's not a pitch. It's a field report from the middle of the work.

If you read this and decide to build it yourself, the map is here. Of course, if this interests you and you would like help with your content, drop us a line.

A few weeks ago, Jeff showed me a 44-page mock-up of a new PureSEM website he'd built in about two hours. It had a VERA chatbot built into it. I had the kind of reaction I haven't had at my computer in a long time. We could barely talk fast enough to keep up with what we were imagining.

We're barely sleeping right now. We can't build this fast enough. That is both a problem and the most accurate description of where we are.

 

If you want to see where your company currently shows up in AI search, we run free AI Search Visibility Assessments for qualifying companies. It takes two minutes to request, and we'll send you a custom analysis and recommendations. 

We also publish updates on what we're building and what we're learning in the PureSEM newsletter. 

 

 

The original draft of this post was written by VERA with all the context described above, including transcriptions of conversations between Jeff and Keith.