There are two divergent trends afoot in enterprise software that are taking up a lot of airspace for companies building in Applied AI. Both are catalyzed by the emergence of Large Language Models (LLMs), and while these trends may not be so different from how software companies have historically been founded, with all eyes on AI they feel more palpable today than ever before:
A reimagination of traditional software
Unlocking new workflows for greenfield use cases
The divergence between these two trends boils down to LLMs and their power to democratize access to AI and machine learning capabilities. LLMs are turning the traditional applied AI market on its head. While full stack machine learning was once a competitive advantage reserved for teams of highly trained PhDs, it is now just an API call away.
For example, back when I was a Product Manager at Hyperscience, it took us years to acquire the raw training data needed to build proprietary models that beat out traditional optical character recognition (OCR) techniques for document extraction. Today, generally available LLMs and other task-specific models abstract away the need to spend significant time and money building these workflows, delivering comparable capability out of the box.
As a result, every company is now an AI company, fostering a new ethos in how products are built.
Emerging Themes in AI-Native Companies
Startups are building AI-native products from the get-go in hopes of either disrupting traditional incumbents with more powerful and productive software, or leveraging AI/ML to build products for net-new use cases once overlooked for being too technically challenging. In turn, we’re seeing products that unearth new experiences and modalities for human-machine interaction.
But even as many companies across both categories have captivated users and quickly grown in popularity, not all that shimmers is gold:
There are fewer barriers to entry in application software today than ever before
This leads to more competition as ideas are commoditized
No company leveraging third-party LLMs has any inherent competitive advantage
As a startup, speed and sales execution are the best ways to win another day (and hopefully build toward a moat) and keep fighting
Competition is Heating Up in the Race for Applied AI Dominance
As the price to develop and integrate applications with AI/ML drops, so do the barriers to entry, leading to a sea of competitors across both existing and net new use cases.
While we already knew that no startup idea is truly unique, we’re seeing an influx of founders scanning the market to find use cases they can attach AI capabilities to. To that end, a concentration of companies is emerging across sales, marketing, code generation, API generation, video generation, and many more use cases. That’s not to say these companies won’t be successful, but founders need to be aware of the number of new entrants trying to capitalize on the same AI paradigm shift.
In the past generation of SaaS companies, those that entered competitive markets aimed to differentiate from the pack by expanding their product surface area to own more of the workflow or by creating a unique GTM motion:
For example, our portfolio company Dialpad operates in the highly competitive Unified Communications (UCaaS) space against incumbents like Avaya and RingCentral. In the early days, Dialpad introduced a modern communications product that emphasized the importance of an enhanced user experience. Paired with faster development cycles than sleepy incumbents, the company ate away at incumbent market share, and crossed the $200M+ ARR mark back in 2023.
Similarly, our portfolio company Socure did the heavy lifting to build an enterprise-grade product from day one. This enabled them to authentically build trust with early adopters like Bank of America. By landing several bulge-bracket banks early in their GTM journey, their base ML model continuously improved, and the company had a stronger story for each net-new customer pitch within the Fortune 500 segment.
Today, however, many AI-native companies are taking a different route: they’re going after niche wedges in big markets, seeking to capture key customer data that, over time, improves their product’s value as they automate more and more of the customer’s workflow and demonstrate more value than the competition. On top of that, many AI-native companies are aiming to differentiate at the model level by leveraging techniques like Retrieval-Augmented Generation (RAG), fine-tuning models on specific datasets, or daisy-chaining LLMs and small language models (SLMs) together for a specific business need.
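To make the RAG pattern mentioned above concrete, here is a minimal, self-contained sketch. The corpus, query, and word-overlap scoring are illustrative stand-ins: a production system would use a learned embedding model and a vector database rather than bag-of-words cosine similarity, and would send the assembled prompt to an actual LLM.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG): retrieve the
# documents most relevant to a query, then splice them into the prompt
# handed to a general-purpose LLM. All data here is hypothetical.
import math
import re
from collections import Counter

CORPUS = [  # stand-in for proprietary customer documents
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise plans include a dedicated support engineer.",
    "Invoices are issued on the first business day of each month.",
]

def tokens(text: str) -> Counter:
    """Bag-of-words token counts (a stand-in for an embedding model)."""
    return Counter(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> float:
    """Cosine similarity between query and document token counts."""
    q, d = tokens(query), tokens(doc)
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the top-k documents and prepend them as prompt context."""
    top = sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many days do I have to request a refund?"))
```

The differentiation argument in the text maps onto the retrieval step: the more proprietary the corpus, the harder the resulting answers are for a competitor calling the same base model to replicate.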
The only problem? This is almost exactly the playbook every AI-native company I meet describes: start with accessible models, add customer data to refine model outputs, acquire enough data over time through your unique workflow to train your own model, and finally gain a true competitive advantage by owning the data layer.
The LLM Adoption Curve
To that point, every applied AI company falls somewhere on what I’m calling the LLM Adoption Curve:
While the above looks like a great strategy in theory, in practice competition is heating up, and competitors see a similar future for their own products.
To determine how to truly differentiate AI companies from the growing herd, here are a few key questions I’m thinking about:
When is the right time for startups to move off of generally available LLMs and build their own models?
Is there a future where generally available LLMs like GPT-5, Claude 4, or DBRX-2 will be good enough that companies won’t need RAG or fine-tuning, let alone their own models?
Can companies even survive long enough to build their own models? Our portfolio company Dialpad built DialpadGPT off of 5 billion minutes and 10+ years of proprietary customer data.
Many startups are aiming to unseat scale-ups, not even true legacy incumbents. So how do scale-ups and incumbents pivot to add their own AI twist?
How can companies better evaluate build-vs.-buy decisions within their target ICP to understand whether a real venture-scale opportunity exists?
How do agents impact all this?
Investing in a net-new AI startup going head-to-head with an incumbent that already has distribution sounds like a losing battle. In the future, will companies need to own enough of the data and workflow layers from the outset to train their own models and become proprietary players? Or will models be so generalizable that they won’t need to?
As we continue to dig into these questions, we’re eager to speak with more founders, practitioners, and builders in the category. And if you’re an early stage startup building in AI, we’d love to hear from you - send me a note!