LLM Product Anti-Patterns

Evan Boyle

In the last four months of building Cortex Click full-time, I’ve tried a lot of LLM products. I’m not talking about ChatGPT, Claude, and other general-purpose apps or developer tools. I’m talking about higher-level software for business users. Products that automate marketing, sales, support, and other business process workflows.

Throughout, I’ve observed many anti-patterns and “own goals”, even in products from companies that have raised $100MM+ in VC funding. This may seem surprising, but the truth is that we’re in the early days of LLM product adoption in the enterprise. FutureSearch’s recent analysis of OpenAI revenue estimates that the majority still comes from consumer subscriptions. API revenue, the portion that represents business built on top of OpenAI, accounts for just 15%. Early days!


There is no cheat sheet for building LLM-driven products. You cannot go to Stack Overflow for UX, product, pricing, and engineering best practices. Everyone, myself included, is figuring this out on the fly.

In this lies the advantage. If you have a strong team of high-agency people with impeccable taste and problem-solving ability, you’re already a step ahead. The solutions you arrive at are likely undiscovered and an order of magnitude better than the competition.

Without further ado, let’s delve into a few of these anti-patterns.


Select a Model

Imagine you walk into an ice cream shop and are met by the attendant:

You: Hello, I would like some ice cream, please.

Attendant: What kind of refrigeration fluid would you like me to put in the machine?

You: Umm… Aren’t you the expert? Why are you asking me this? I just want some ice cream.

This interaction is obviously not okay. But why do we so commonly accept the same thing in LLM products?

I’m talking about the all-too-common pattern of asking users to select which LLM model to use as part of the product workflow.


When you are commanding SaaS margins on top of base LLM costs, the user is paying you for workflow. They are paying you for taste and expertise. They rightfully expect that you are the expert and that you make all of the hard decisions for them that result in the best possible outcome.

At Cortex Click, we are building the highest quality content engine for developer marketing, with the aim of accelerating every step of the PLG funnel with AI. When new models come out, we look at benchmarks and run extensive evaluations. If we decide to switch to a new model, we update our prompts and run a battery of human evaluations and back tests to ensure that we’re squeezing out the most juice possible for our customers.

The calculus is completely different if you’re selling infrastructure or developer tools to software engineers. Many LLM products selling to business users miss this nuance.
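The evaluation loop described above, where the product (not the user) owns the model choice, can be sketched roughly like this. The model names, canned outputs, and exact-match scorer are all illustrative assumptions standing in for real API calls and a real eval harness:

```python
# Sketch of server-side model selection: the product runs its own
# evaluations and silently promotes a winner; users never see a dropdown.

GOLDEN_SET = [
    {"input": "summarize release notes", "expected": "summary"},
    {"input": "draft launch tweet", "expected": "tweet"},
]

# Stand-in for real model calls; in practice these would hit an LLM API.
CANNED_OUTPUTS = {
    "model-a": {"summarize release notes": "summary", "draft launch tweet": "essay"},
    "model-b": {"summarize release notes": "summary", "draft launch tweet": "tweet"},
}

def score(model: str) -> float:
    """Fraction of golden-set cases the model gets right. Exact match is a
    toy metric; a real harness would use rubric grading or human review."""
    hits = sum(
        1 for case in GOLDEN_SET
        if CANNED_OUTPUTS[model][case["input"]] == case["expected"]
    )
    return hits / len(GOLDEN_SET)

def pick_production_model() -> str:
    # Re-run whenever a new model ships; promote the winner behind the scenes.
    return max(CANNED_OUTPUTS, key=score)
```

The point of the design is that re-evaluating and swapping models is a deployment decision made by the vendor, never a form field pushed onto the customer.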


Misguided Feature Gating

Feature gating, or restricting access to certain functionality, is common in products with seat-based pricing models. You see it even in consumption-based pricing models, which often restrict features like SAML and SSO to enterprise SKUs.

The gating that I've observed in LLM products is much more egregious. Features that increase the quality of output by an order of magnitude are often gated on higher-priced SKUs.


Grounding LLMs in source data provided by the user is probably the single most effective way to boost output quality. Gating this feature means that you have hamstrung yourself during the ever-important evaluation period. The only version of your product that the user will see is one that is 10x worse than what it could be, effectively trading upsell for churn.
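For readers unfamiliar with the mechanics, here is a rough sketch of what grounding looks like under the hood. The naive keyword-overlap retriever stands in for the embedding search a real product would use, and every name below is illustrative:

```python
# Minimal grounding sketch: retrieve relevant user-provided source material
# and prepend it to the prompt so the model answers from it.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (a toy stand-in
    for embedding similarity) and return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(question: str, documents: list[str]) -> str:
    # Stuff the retrieved snippets into the prompt ahead of the question.
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )
```

Withholding this step from trial users means they only ever see the model hallucinating without their source data, which is exactly the impression you do not want to make during an evaluation.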

Let's go back to the previous "Select a Model" example. Notice anything?


In this instance, I'm on a seven-day free trial, and GPT-3.5 and Claude-3 are my only options. Higher-quality models that would again increase the quality of output by an order of magnitude are only available once I've put in a credit card. Many factors are at play here, like protecting your margins against trial abuse and trying to upsell and convert. I'm sympathetic, but ultimately there are sounder solutions.

The absolute last thing that you should do when building an LLM product is gate a feature that increases quality. The only thing it will produce is lower conversion rates and increased churn.


Token-Based Pricing

On the surface, token-based pricing provides an easy way to scale costs with consumption, making it simpler to control margins. However, this model invites an unwanted comparison between your business product and the base LLM models it is built on.


You do not want to invite this comparison. To build a sustainable SaaS business, you have to charge significantly more than your underlying infrastructure costs.

Token-based pricing essentially reduces the product to a commodity, rather than a workflow and outcome-driven solution. Don't undermine your product. Your customers are not simply consuming API calls or tokens; they are leveraging a workflow that drives business outcomes.
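Some back-of-envelope arithmetic makes the gap concrete. Every number below is a made-up assumption for illustration, not Cortex Click's actual economics:

```python
# Back-of-envelope margin math; all figures are hypothetical.

TOKENS_PER_ARTICLE = 20_000        # tokens consumed per finished deliverable
BASE_COST_PER_1K_TOKENS = 0.01     # what the LLM provider charges you, in $

infra_cost = TOKENS_PER_ARTICLE / 1_000 * BASE_COST_PER_1K_TOKENS  # $0.20

# Token-based pricing invites comparison with the provider's own rates,
# capping you at a small multiple of infrastructure cost.
token_price = infra_cost * 3       # $0.60 per article

# Outcome-based pricing is anchored on the value of the deliverable
# (e.g. what an agency would charge), not on tokens consumed.
outcome_price = 50.00              # $ per finished article

token_margin = token_price - infra_cost      # ~$0.40
outcome_margin = outcome_price - infra_cost  # ~$49.80
```

The specific numbers don't matter; the shape does. Pricing on tokens anchors you to the commodity cost, while pricing on outcomes anchors you to the value of the workflow.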

The only situation where it makes sense to bill on tokens is if you are building developer tools or underlying infrastructure.


Garbage In, Garbage Out

Product teams mirror LLMs in a fundamental principle: garbage in, garbage out. There is immense pressure building in this industry. Half a dozen seed-stage companies launch every day, and every Fortune 2000 company with massive distribution channels is racing to figure out its AI strategy. The implication is that a ton of companies are raising a ton of money and shipping products that are not truly exceptional.

This rapid pace of development and investment, while exciting, makes it hard to balance speed with thoughtful product work. We're in the middle of the AI gold rush, and amidst the hype it's important to remember that sustainable success comes not from being first to market, but from delivering products that solve big problems and delight users.