
When it comes to large language models, should you build or buy?


Tanmay Chopra

Contributor

Tanmay Chopra works in machine learning at AI search startup Neeva, where he wrangles language models large and small. Previously, he oversaw the development of ML systems globally to counter violence and extremism on TikTok.

Last summer could only be described as an “AI summer,” especially with large language models making an explosive entrance. We saw huge neural networks trained on massive corpora of data that can accomplish exceedingly impressive tasks, none more famous than OpenAI’s GPT-3 and its newer, hyped offspring, ChatGPT.

Companies of all shapes and sizes across industries are rushing to figure out how to incorporate and extract value from this new technology. But OpenAI’s business model has been no less transformative than its contributions to natural language processing. Unlike almost every previous release of a flagship model, this one does not come with open-source pretrained weights — that is, machine learning teams cannot simply download the models and fine-tune them for their own use cases.

Instead, they must either pay to use them as-is, or pay to fine-tune the models and then pay four times the as-is usage rate to employ them. Of course, companies can still choose other open-source models released by peers.

This has given rise to an age-old corporate — but entirely new to ML — question: Would it be better to buy or build this technology?

There is no one-size-fits-all answer to this question, and I’m not trying to provide one. I mean to highlight the pros and cons of both routes and offer a framework that might help companies evaluate what works for them, along with some middle paths that attempt to combine components of both worlds.

Buying: Fast, but with clear pitfalls

Let’s start with buying. There are a whole host of model-as-a-service providers that offer custom models as APIs, charging per request. This approach is fast, reliable and requires little to no upfront capital expenditure. Effectively, this approach de-risks machine learning projects, especially for companies entering the domain, and requires limited in-house expertise beyond software engineers.

Projects can be kicked off without requiring experienced machine learning personnel, and the model outcomes can be reasonably predictable, given that the ML component is being purchased with a set of guarantees around the output.

Unfortunately, this approach comes with very clear pitfalls, primary among which is limited product defensibility. If you’re buying a model anyone can purchase and integrating it into your systems, it’s not too far-fetched to assume your competitors can achieve product parity just as quickly and reliably. That will be true unless you can create an upstream moat through non-replicable data-gathering techniques or a downstream moat through integrations.

What’s more, for high-throughput solutions, this approach can prove exceedingly expensive at scale. For context, OpenAI’s DaVinci costs $0.02 per thousand tokens. Conservatively assuming 250 tokens per request and similar-sized responses, you’re paying $0.01 per request. For a product with 100,000 requests per day, you’d pay more than $300,000 a year. Obviously, text-heavy applications (attempting to generate an article or engage in chat) would lead to even higher costs.
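For readers who want to plug in their own numbers, here is a quick back-of-the-envelope sketch of that math; the per-token price, token counts and traffic figures are the illustrative assumptions above, not quotes:

```python
# Back-of-the-envelope API cost estimate for a "buy" deployment.
# Assumptions (illustrative): $0.02 per 1,000 tokens, ~250 prompt tokens
# plus a similar-sized response per request, 100,000 requests per day.
PRICE_PER_1K_TOKENS = 0.02
TOKENS_PER_REQUEST = 250 + 250        # prompt + similar-sized response
REQUESTS_PER_DAY = 100_000

cost_per_request = (TOKENS_PER_REQUEST / 1_000) * PRICE_PER_1K_TOKENS
cost_per_year = cost_per_request * REQUESTS_PER_DAY * 365

print(f"Cost per request: ${cost_per_request:.4f}")   # ~$0.01
print(f"Cost per year:    ${cost_per_year:,.0f}")     # ~$365,000
```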

You must also account for the limited flexibility tied to this approach: You either use models as-is or pay significantly more to fine-tune them. It is worth remembering that the latter approach would involve an unspoken “lock-in” period with the provider, as fine-tuned models will be held in their digital custody, not yours.

Building: Flexible and defensible, but expensive and risky

On the other hand, building your own tech allows you to circumvent some of these challenges.

In most cases, “building” refers to leveraging and fine-tuning open-source backbones, not building from scratch (although that also has its place). This approach grants you exponentially greater flexibility for everything from modifying model architectures to reducing serving latencies through distillation and quantization.
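As one hedged illustration of that flexibility, the sketch below applies PyTorch’s dynamic int8 quantization to an open-source backbone to shrink its CPU-serving footprint. The model choice and settings are illustrative, not a recommendation:

```python
# Minimal sketch: shrink an open-source backbone for cheaper CPU serving
# using PyTorch dynamic quantization (int8 weights for linear layers).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM

# Illustrative small backbone; swap in whatever open-source model you use.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
model.eval()

# Replace nn.Linear weights with int8 versions; activations stay in fp32
# and are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized copy can be saved and served in place of the original.
torch.save(quantized_model.state_dict(), "gpt_neo_125m_int8.pt")
```

Distillation takes the same spirit further by training a smaller student model on the larger model’s outputs, trading some quality for latency and cost.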

It’s worth remembering that while purchased models might be impressive at many tasks, models trained in-house may well achieve sizable performance improvements on a specific task or domain. At scale, these models are much cheaper to deploy and can lead to significantly more defensible products that take competitors much longer to replicate.

The most prominent example of this is the TikTok recommendation algorithm. Despite many of its details being publicly available in various research papers, even massive ML teams at its competitors have yet to replicate and deploy a similarly effective system.

Of course, there are no free lunches: Developing, deploying and maintaining elaborate machine learning systems in-house requires data engineering, machine learning and DevOps expertise, all of which are scarce and highly sought-after. Obviously, that requires high upfront investment.

The success of machine learning projects is also less predictable when you’re building them in-house, and some estimates put the likelihood of success at around the 20% mark. This may prolong the time-to-market.

All in all, while building looks extremely attractive in the long run, it requires leadership with a strong appetite for risk over an extended time period as well as deep coffers to back said appetite.

The middle road

That said, there are middle-ground approaches that attempt to balance these positives and negatives. The first and most oft-discussed is prompt engineering.

This approach starts with buying, then building a custom input template that serves as a stand-in for fine-tuning. It aims to guide the off-the-shelf model with clear examples or instructions, creating a middling level of defensibility in the form of custom prompts while retaining the benefits of buying.
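Here is a minimal sketch of what that can look like in practice, assuming the pre-1.0 openai Python client and a hypothetical support-ticket classification task; the prompt wording, examples and model name are placeholders:

```python
# Minimal prompt-engineering sketch: a custom few-shot template wrapped
# around an off-the-shelf completion model. Task, examples and model name
# are hypothetical placeholders.
# Requires: pip install "openai<1.0" and OPENAI_API_KEY in the environment.
import openai

FEW_SHOT_EXAMPLES = [
    ("I was charged twice this month.", "billing"),
    ("The app crashes when I open settings.", "bug"),
    ("How do I export my data?", "how-to"),
]

def build_prompt(ticket: str) -> str:
    """Assemble an instruction plus few-shot examples into a single prompt."""
    lines = ["Classify each support ticket as billing, bug or how-to.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}\nLabel: {label}\n")
    lines.append(f"Ticket: {ticket}\nLabel:")
    return "\n".join(lines)

response = openai.Completion.create(
    model="text-davinci-003",        # off-the-shelf model, used as-is
    prompt=build_prompt("My invoice shows the wrong amount."),
    max_tokens=5,
    temperature=0,
)
print(response["choices"][0]["text"].strip())   # expected: "billing"
```

The template itself is the asset here: it encodes your task framing and examples, and it travels with you if you later switch providers or backbones.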

Another way is to seek open-source alternative backbones of closely equivalent quality and build atop them. This reduces upfront costs and mitigates some of the output unpredictability while retaining the flexibility offered by building. For example, GPT-J and GPT-Neo are two open-source alternatives to GPT-3.
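As a rough illustration, here is a minimal sketch of swapping in such a backbone with Hugging Face Transformers before any fine-tuning; the model choice and generation settings are illustrative:

```python
# Minimal sketch: use an open-source backbone (here GPT-Neo) as a
# drop-in completion model before any fine-tuning.
# Requires: pip install torch transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"   # open-source GPT-3-style backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Classify the support ticket: 'I was charged twice.'\nLabel:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,                      # greedy decoding for repeatability
    pad_token_id=tokenizer.eos_token_id,  # GPT-Neo has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

From there, the same weights can be fine-tuned on your own data, which is where the defensibility discussed above starts to accrue.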

A slightly more intricate and newer approach is closed-source approximation. This involves training an in-house model that aims to minimize the difference between GPT-3’s output and its own, either at the final output or at an earlier embedding stage. This reduces time to market by leveraging GPT-3 in the short term, then transitioning to the in-house system as its quality improves, enabling cost optimization and defensibility in the long term.
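Here is a heavily simplified sketch of the embedding-stage variant, assuming you have already cached GPT-3 embeddings for a sample of your production prompts; the model architecture, dimensions and data below are hypothetical placeholders:

```python
# Minimal sketch of closed-source approximation at the embedding stage:
# train a small in-house encoder to match cached GPT-3 embeddings for the
# same inputs. All names, shapes and data here are hypothetical.
import torch
import torch.nn as nn

TEACHER_DIM = 1536                 # dimensionality of the cached API embeddings
VOCAB_SIZE, STUDENT_DIM = 32_000, 512

class StudentEncoder(nn.Module):
    """Tiny in-house encoder projected into the teacher's embedding space."""
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, STUDENT_DIM)
        self.proj = nn.Linear(STUDENT_DIM, TEACHER_DIM)

    def forward(self, token_ids):
        return self.proj(self.embed(token_ids))

student = StudentEncoder()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()             # minimize distance to the teacher's embeddings

# Placeholder batch: token ids for prompts plus their cached teacher embeddings.
token_ids = torch.randint(0, VOCAB_SIZE, (8, 64))
teacher_embeddings = torch.randn(8, TEACHER_DIM)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(student(token_ids), teacher_embeddings)
    loss.backward()
    optimizer.step()
```

In practice you would replace the random tensors with real prompts and their cached API responses, and track how closely the in-house model matches the purchased one on held-out production traffic.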

Still confused about which way to go? Here’s a three-question quiz:

Are you currently an AI business?

If yes, you’ll need to build to maintain defensibility.

If you’re not, buy for now and use prompt engineering to tailor the model to your use cases.

If you want to be an AI business, work toward that over time: store data cleanly, start building an ML team and identify monetizable use cases.

Is your use case addressed by existing pre-trained models?

Can you afford to simply buy without putting in much additional work? If so, you should probably shell out the cash, especially if time to market is a factor.

Building is not fast, easy or cheap, and it is especially hard to justify if your use case isn’t monetizable or the model is only for internal use.

Do you have unpredictable or extremely high request volumes?

If yes, buying might not be economically feasible, especially in a consumer setting. That said, be realistic: quantify your request volumes and buying costs to whatever extent possible. Building can be deceptively expensive, especially because you’ll need to hire ML engineers, buy tooling and pay for hosting.
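If it helps, here is a rough break-even sketch comparing the two paths; every dollar figure is an illustrative assumption to be replaced with your own quotes and estimates:

```python
# Rough break-even sketch: at what daily request volume does building
# overtake buying? All dollar figures are illustrative assumptions.
API_COST_PER_REQUEST = 0.01            # e.g. ~500 tokens at $0.02 per 1K tokens
BUILD_FIXED_COST_PER_YEAR = 600_000    # assumed ML headcount and tooling
BUILD_HOSTING_PER_REQUEST = 0.001      # assumed self-hosted inference cost

def annual_cost_buy(requests_per_day: int) -> float:
    return requests_per_day * 365 * API_COST_PER_REQUEST

def annual_cost_build(requests_per_day: int) -> float:
    return BUILD_FIXED_COST_PER_YEAR + requests_per_day * 365 * BUILD_HOSTING_PER_REQUEST

for rpd in (10_000, 100_000, 1_000_000):
    print(f"{rpd:>9,} req/day  buy ~${annual_cost_buy(rpd):,.0f}  "
          f"build ~${annual_cost_build(rpd):,.0f}")
```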

Hopefully, this helps you kick off your journey!
