This is part one of a two-part post analyzing the impact of recent developments in AI on a few different companies and industries.
Press Release Season and the Why Now for AI
The massive success of ChatGPT and other generative AI companies has kick-started Press Release Season (PRS) in the technology industry. PRS is an industry phenomenon that occurs when a new technology emerges that captivates the hearts and minds of the market. And in 2023, AI has done just that.
But the application of AI and ML is not that new – the incumbent tech companies have been using AI and ML to power their core business for two decades.
So what’s new? Startups such as OpenAI, Anthropic, Stability AI, and Cohere are making pre-trained models widely available at a relatively cheap cost. Most importantly, these companies have invested, and continue to invest, millions in computing and training costs to create high-performance models that no single software platform outside of the large public tech companies could have built on its own. Additionally, the underlying hardware used for these models has meaningfully improved over the last ten years and has gotten cheaper1.
Clearly, there are a lot of tailwinds that AI is benefitting from. Companies and industries are flocking, and will continue to flock, to this market. But given how early we are in the adoption and evolution of this new technology, it remains to be seen how it will impact each industry. My mental framework for analyzing the impact these technologies will have on an industry, market, or company is to ask four questions:
Does the technology strengthen the core customer value proposition?
Does the emergence of the technology create new and formidable competitors?
What is the cost of not adopting the technology?
What is the impact of adopting the technology on the company/industry’s business model?
This framework is largely a derivative of Clayton Christensen's Innovator's Dilemma framework and Michael Mauboussin's writing2. From Measuring the Moat:
Sustaining technologies foster product improvement. They can be incremental, discontinuous, or even radical. But sustaining technologies operate within a defined value network—the “context within which a firm identifies and responds to customers’ needs, solves problems, procures input, reacts to competitors, and strives for profit.” In direct contrast, disruptive technologies offer the market a very different value proposition. Products based on disruptive technologies may initially appeal only to relatively few customers who value features such as low price, smaller size, or greater convenience. Furthermore, Christensen finds that these technologies generally underperform established products in the near term.
Does the technology strengthen the core customer value proposition?
One of my big takeaways from Ben Thompson's piece is that, save for Google, the incumbent tech companies likely view AI as a sustaining innovation. I broadly agree with that assessment (and we'll get to Google in a bit). Incumbents in digital-native and technology businesses will largely treat AI as a sustaining innovation. In fact, for the past decade that has largely been the case! Apple, Google, Facebook, TikTok, Netflix, Uber, and Amazon3 have all used AI and ML to strengthen their core business moats. They could do so in large part because they could absorb the costs associated with training and fine-tuning their own models.
But now, with a wave of well-funded startups riding the cost curve of compute and democratizing access to high-performance models, non-incumbents can also be beneficiaries of this technology wave. The impact can easily be observed in the recent announcements from incumbent software companies quickly adopting LLMs and generative AI features.
Take Outreach, for example: the company announced it is integrating AI into its product to auto-generate email copy for sales reps.
Outreach Smart Email Assist is an AI-powered automatic email response generator for sales people. It moves beyond email templates and auto generates accurate and relevant email copy based on the prior context of conversations between buyers and sellers. Sales reps can focus their time on editing and personalizing the AI-generated content, instead of drafting these emails from scratch. Salespeople gain efficiency, without sacrificing quality or the ability to personalize.
Integration of AI into Outreach's email automation product strengthens the core value proposition, which is to speed up the sales outreach and prospecting process. Similarly, Intercom recently announced that it is incorporating ChatGPT functionality into its customer support platform. All the features Intercom launched effectively strengthen its core value proposition: automating customer interactions and increasing the productivity of customer support teams.
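Mechanically, features like this are thin wrappers around a hosted model. Below is a minimal sketch, assuming the OpenAI Python client and a completion-style model; the prompt wording, model choice, and the draft_reply helper are my own illustrative assumptions, not how Outreach or Intercom actually built their products.

```python
# Minimal sketch: auto-drafting a sales email reply from prior conversation
# context with a hosted LLM. Assumes the openai Python package (v0.x) and an
# OPENAI_API_KEY environment variable; prompt and model are illustrative only.
import openai


def draft_reply(conversation_history: str, rep_name: str) -> str:
    # Feed the prior buyer/seller thread to the model and ask for a reply
    # the rep can edit and personalize rather than write from scratch.
    prompt = (
        "You are helping a sales rep draft an email reply.\n"
        f"Prior conversation between buyer and seller:\n{conversation_history}\n\n"
        f"Draft a concise, relevant reply from {rep_name}:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=250,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```

The hard part for an incumbent is not this API call; it's the distribution and the proprietary conversation data used to prompt (and eventually fine-tune) the model.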
Both Outreach and Intercom are incumbents in their respective markets. And as incumbents, they will have the benefit of leveraging their existing distribution and, more importantly, customer data to optimize and fine-tune their models and widen their moat4.
And of course, the biggest incumbent of them all, Microsoft, announced a wide array of new features that drastically improve core Microsoft Teams functionality. The company also plans to integrate OpenAI technology into the core Office products. Tech companies without existing core AI features are quickly adopting the technology to expand and improve their core value propositions.
Does the emergence of the technology create new and formidable competitors or alternatives?
Not all technology businesses are safe. The challenge some existing businesses face is that the general availability of powerful LLMs commoditizes their core value proposition to customers. Said another way, the logical losers of OpenAI et al. making AI/ML generally available, cheap, and easy to use are the companies that built their businesses around previous iterations of this technology.
Take, for example, Gong, which raised $250M at a $7.5B valuation in 2021. The company's core value proposition is the transcription of sales calls and the use of NLP to extract intelligence and data from them. Transcription as a core feature has been increasingly commoditized over the last five years, but until recently the NLP part of the company's product was difficult to emulate. Now that GPT-3.5 is generally available, how long before Gong's NLP functionality is also commoditized?
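To illustrate why, here is a minimal sketch of extracting call intelligence from a transcript with a general-purpose model; the prompt, model name, and summarize_call helper are illustrative assumptions, not Gong's or Chorus' actual pipeline.

```python
# Minimal sketch: pulling "call intelligence" out of a sales transcript with an
# off-the-shelf LLM. Assumes the openai Python package (v0.x) and an
# OPENAI_API_KEY environment variable; prompt and model are illustrative only.
import openai


def summarize_call(transcript: str) -> str:
    # Ask the model for the kinds of signals a revenue-intelligence tool surfaces:
    # objections, next steps, pricing/budget mentions, competitor mentions.
    prompt = (
        "From the sales call transcript below, list:\n"
        "1. The buyer's objections\n"
        "2. Agreed next steps\n"
        "3. Any pricing or budget discussion\n"
        "4. Competitors mentioned\n\n"
        f"Transcript:\n{transcript}\n\nAnalysis:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=0.2,  # keep the extraction conservative
    )
    return response["choices"][0]["text"].strip()
```

A sketch like this obviously isn't a product, but it shows how low the floor for the NLP layer has become once transcription itself is a commodity.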
Microsoft has already integrated some of this functionality into Microsoft Teams.
The natural product extension could be an integration between Teams and LinkedIn. Such an integration could enable ChatGPT to confirm for the sales rep whether the person they are speaking with is actually the budget owner in the company's org chart. Similar functionality could be built by ZoomInfo, which now owns Chorus, one of Gong's closest competitors. Until now, Chorus' NLP functionality has meaningfully lagged Gong's; could access to OpenAI's LLMs help bridge the gap?
This is not to say Gong is SOL, but a $7.5B valuation probably means that investors and management expected meaningful and sustained growth in the future. When new technologies make a company's core value proposition a commodity, they invite new competitors that can offer a comparable feature set. The impact will most likely be felt in the form of price pressure and potentially higher loss rates on new deals and renewals.
The advantage Gong has today is a much richer dataset, because it has been doing this for longer, and it could probably extend that advantage further. Other similarly positioned companies include Grammarly, Jarvis, Copy.ai5, Intercom, and Cresta.
What is the cost of not adopting the technology? And the Google Question
Search is another area where AI might enable new competitors or alternatives. At a base level, the value of search to the average consumer is to find information and answer questions. The traditional search model that Google dominates faces a few new challenges:
The business model is built around charging companies for the right to fill the results page with promoted content and ads.
An entire industry has been built around gaming PageRank and other search algorithms to rank higher on the results page.
Generative AI, unfortunately, introduces a new challenge for search: the gross volume of articles that can be written programmatically with the specific intent of optimizing for SEO will likely skyrocket.
These factors are steadily reducing the signal-to-noise ratio for the average consumer. And since many of the results are created with the purpose of selling you something, a degree of inauthenticity has crept into search over the years. A popular Hacker News post from about a year ago discusses this issue. The gist of the post is that the increasingly noisy results in Google searches are pushing some consumers towards more authentic and trusted forums such as Reddit for search6. A comic strip from the aforementioned post captures the essence of the argument:
Assuming the author of the post, the comic strip creator, the people who commented on the article, and I are not massive outliers, one implication for Google is that some platforms will use LLMs to create verticalized or community-specific search engines that provide answers based on each platform's existing content and datasets. The net effect would be that some percentage of search traffic would be lost. It would not be that different from posting a question on Quora today. The key difference, and a very important one, is immediate gratification: users can get answers as quickly as they could from Google. The natural platforms for this will likely be Reddit, Tripadvisor, Yelp, Stackoverflow, Quora, etc.
In this world, when you're on the bus to a new city, rather than browsing Reddit, Yelp, and Tripadvisor for restaurants and sights to see, you can ask your own personal concierge to sift through content on those platforms, combine it with your own historical preferences, and provide recommendations.
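A minimal sketch of what that concierge could look like, assuming naive keyword retrieval over a pile of community posts and the OpenAI completion API; the recommend helper, prompt, and model are hypothetical, not any platform's actual product.

```python
# Minimal sketch of a travel "concierge": retrieve community posts about a city,
# combine them with the user's stated preferences, and let an LLM recommend.
# Retrieval here is a naive keyword filter; everything is an illustrative assumption.
import openai  # assumes OPENAI_API_KEY is set in the environment


def recommend(city: str, preferences: str, community_posts: list[str]) -> str:
    # Naive retrieval: keep only posts that mention the city, capped at 20.
    relevant = [p for p in community_posts if city.lower() in p.lower()][:20]
    context = "\n".join(relevant)
    prompt = (
        f"Traveler preferences: {preferences}\n"
        f"Community posts about {city}:\n{context}\n\n"
        f"Recommend restaurants and sights in {city} that fit these preferences:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```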
Will this be the end of Google? No.
In the short term, this is not a meaningful threat to Google because we're talking about a subset of queries that need to be addressed and a subset of users who do not want to sift through Google search results. And in fact, ChatGPT is a very poor search replacement today because its training data does not extend beyond 2021. But that's typically how disruption starts.
The primary reason none of this is that threatening for Google is its massive distribution advantage. Google is the default search vendor across iOS, Android, Chrome, and most other browsers. The challenge Google faces is that Microsoft is lurking in the background, trying to offer the best of both worlds with Bing.
And the fact that Microsoft has been aggressively co-opting the OpenAI and ChatGPT brands over the past few months should not be dismissed. ChatGPT is technically the fastest-growing consumer app of all time. And the company has set the price (and signaled the value) of ChatGPT at $20 per month7.
What happens when Bing offers ChatGPT embedded in its search functionality for free to users who change their default search engine to Bing? What percentage of users would be willing to do that? All while Google debates and tests how much of its monetizable volume it is willing to cannibalize in order to launch a version of search with curated answers.
And even though Google has >90% market share in search, Bing reportedly sees ~50% of Google's queries on its platform, especially for the "fat tail" of search queries. This means the company probably has a pretty strong dataset to pair with ChatGPT's existing data.
Is that enough overlapping data for Bing + ChatGPT to train on and deliver a product competitive with Google? TBD. However, Google is clearly focused on this threat, and AI-powered search is not the first time Google's search franchise has been threatened by a form-factor change.
The near-term losers of this new technology in search might just be the Q&A and UGC platforms like StackOverflow, Quora, G2 Crowd, and Tripadvisor. In some ways, the user experience might be tenfold better. Imagine Google Maps accurately highlighting restaurants and sights to see when users ask "What's there to do in Boise?", based on the reviews and questions about Boise left on Tripadvisor and Reddit.
For these UGC companies, the cost of not adopting AI could really be a death knell. Platforms like Quora primarily rely on paywalls and advertising to drive revenue, effectively traffic-based revenue models. Moreover, much of the traffic to these platforms comes from people typing questions into Google. In a future where Google and Bing have incorporated LLMs into their platforms (search, maps, voice), the average consumer's need to go directly to these sites to find and post answers will be greatly diminished.
Quora’s pivot
So it was interesting to see Quora lean into this new paradigm with the launch of its iOS app, Poe. Quora's co-founder and CEO announced the launch in a Twitter thread that is worth reading in full; a few excerpts below:
Today, we are opening up public access to a new AI product we have been building called Poe. Poe lets people ask questions, get instant answers, and have back-and-forth conversations with several AI-powered bots…
At the same time, we will evolve Quora itself as AI enables better experiences, and we will distribute content created on Poe on Quora when it meets a high enough quality standard, but we are starting from scratch in building Poe as a new product.
We hope that Poe can fill this gap and greatly reduce the amount of work needed for any AI developer to reach a large audience of users. Quora has 400M monthly unique visitors and we’ll be making it easy for all of them to use Poe and to see the best content created on Poe. We are in the process of making an API that will make it easy for any AI developer to plug their model into Poe
As a quick note for Quora creators, we are not currently training the language models in Poe using the writing that everyone has shared on Quora. If we do this in the future, we will allow creators to opt out if they want. We think letting AI learn from what you have written can be a powerful way to help the rest of the world, as many other people will later learn from the AI, similarly to how a teacher might judge their own success not just by how well their students are educated but by all the downstream impact their students have on the world. However, we respect that some people might have a different view and we do not want to impose it on everyone who decides to share their knowledge on Quora.
The company is launching a new, separate product that is not trained on existing Quora data. The natural progression is likely that Poe will eventually train on Quora's platform and existing content8. More importantly, the press release is written primarily for developers. The positioning implies that Poe (and Quora) would be a provider of training data for OpenAI, Anthropic, Cohere, and the next generation of LLMs. If this works, it's somewhat brilliant.
Quora could go from being a broadly used but hard-to-monetize Q&A website that is clearly going to be disrupted by ChatGPT-type interfaces and AI search to becoming a fundamental provider of training data and a testing platform for the next wave of LLMs. The pivot is not unheard of; Foursquare underwent a similar transition.
To be fair, Quora can do this in part because (1) it believes there will be a large segment of internet users who want their questions answered by its community of contributors, and (2) it is probably not cannibalizing a massive advertising business, so it is a bet worth making.
There’s a whole slew of industries, companies, and founders that will be faced with the Quora dilemma.
A non-exhaustive list of companies I think will face this dilemma in the near to medium term:
Buzzfeed
Getty Images
Fiverr and Upwork
Grammarly
In the next post, I’ll analyze a few of these businesses in the context of the fourth question: What is the impact of adopting the technology on the company/industry’s business model?
Somewhat subsidized by the crypto boom.
Ben Thompson also recently applied this framework to determine the impact of AI on the Big Four tech companies, and I broadly agree with his conclusions.
Google is the behemoth in search, but all the FAANG companies and other well-funded tech companies have AI and ML at the center of their platforms: Facebook (news feed), Uber (pricing and matching), Amazon (recommendations, Alexa), TikTok (feed), Netflix (recommendations).
It's unclear today just how much data is required to reach parity with an incumbent; it's likely domain- and segment-specific. There are academic papers that provide guidance, but frankly it's too early to tell.
Jarvis and Copy.ai are particularly susceptible to this risk because they built their entire businesses on getting exclusive access to early versions of GPT.
Obviously, this is likely limited to a subset of queries (e.g., question- and situation-based searches).
It's probably finger-in-the-air pricing on OpenAI's part, but there is likely some percentage of the internet that is willing to pay that.
The press release effectively teed this up: the training will be opt-out, not opt-in. As with all other tech companies, the opt-out mechanism will likely come in the form of an end-of-year email announcing an update to the privacy policy. Side note: PayPal, please stop sending me those privacy notices; I haven't used your platform in ten years.