
When it comes to large language models, should you build or buy?


Tanmay Chopra

Contributor
Tanmay Chopra works in machine learning at AI search startup Neeva, where he wrangles language models large and small. Previously, he oversaw the development of ML systems globally to counter violence and extremism on TikTok.

Last summer could only be described as an “AI summer,” especially with large language models making an explosive entrance. We saw huge neural networks trained on massive corpora of data that can accomplish exceedingly impressive tasks, none more famous than OpenAI’s GPT-3 and its newer, hyped offspring, ChatGPT.

Companies of all shapes and sizes across industries are rushing to figure out how to incorporate and extract value from this new technology. But OpenAI’s business model has been no less transformative than its contributions to natural language processing. Unlike almost every previous release of a flagship model, this one does not come with open-source pretrained weights — that is, machine learning teams cannot simply download the models and fine-tune them for their own use cases.

Instead, they must either pay to use the models as-is or pay to fine-tune them and then pay four times the as-is usage rate to employ them. Of course, companies can still choose peer open-source models instead.

This has given rise to an age-old corporate — but entirely new to ML — question: Would it be better to buy or build this technology?

It’s important to note that there is no one-size-fits-all answer to this question, and I’m not trying to provide one. I mean to highlight the pros and cons of both routes and offer a framework to help companies evaluate what works for them, along with some middle paths that attempt to combine components of both worlds.

Buying: Fast, but with clear pitfalls

Let’s start with buying. There are a whole host of model-as-a-service providers that offer custom models as APIs, charging per request. This approach is fast, reliable and requires little to no upfront capital expenditure. Effectively, this approach de-risks machine learning projects, especially for companies entering the domain, and requires limited in-house expertise beyond software engineers.

Projects can be kicked off without requiring experienced machine learning personnel, and the model outcomes can be reasonably predictable, given that the ML component is being purchased with a set of guarantees around the output.

Unfortunately, this approach comes with very clear pitfalls, primary among which is limited product defensibility. If you’re buying a model that anyone can purchase and integrating it into your systems, it’s not far-fetched to assume your competitors can achieve product parity just as quickly and reliably. That will be true unless you can create an upstream moat through non-replicable data-gathering techniques or a downstream moat through integrations.

What’s more, for high-throughput solutions, this approach can prove exceedingly expensive at scale. For context, OpenAI’s DaVinci costs $0.02 per thousand tokens. Conservatively assuming 250 tokens per request and similar-sized responses, you’re paying $0.01 per request. For a product with 100,000 requests per day, you’d pay more than $300,000 a year. Obviously, text-heavy applications (attempting to generate an article or engage in chat) would lead to even higher costs.
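To put rough numbers on it, here’s a back-of-the-envelope calculator. The rate and token counts below are illustrative assumptions for the scenario above, not quotes:

```python
# Back-of-the-envelope API cost estimate (illustrative numbers only).

PRICE_PER_1K_TOKENS = 0.02   # assumed DaVinci-style rate, $ per 1,000 tokens
TOKENS_PER_REQUEST = 250     # assumed prompt size
TOKENS_PER_RESPONSE = 250    # assumed similar-sized completion
REQUESTS_PER_DAY = 100_000

def annual_api_cost(requests_per_day: int) -> float:
    tokens_per_call = TOKENS_PER_REQUEST + TOKENS_PER_RESPONSE
    cost_per_call = tokens_per_call / 1_000 * PRICE_PER_1K_TOKENS
    return cost_per_call * requests_per_day * 365

print(f"${annual_api_cost(REQUESTS_PER_DAY):,.0f} per year")  # roughly $365,000
```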

You must also account for the limited flexibility tied to this approach: You either use models as-is or pay significantly more to fine-tune them. It is worth remembering that the latter approach would involve an unspoken “lock-in” period with the provider, as fine-tuned models will be held in their digital custody, not yours.

Building: Flexible and defensible, but expensive and risky

On the other hand, building your own tech allows you to circumvent some of these challenges.

In most cases, “building” refers to leveraging and fine-tuning open-source backbones, not building from scratch (although that also has its place). This approach grants you exponentially greater flexibility for everything from modifying model architectures to reducing serving latencies through distillation and quantization.
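As one example of that flexibility, here’s a minimal sketch of post-training dynamic quantization with PyTorch, using a small GPT-Neo checkpoint as a stand-in backbone; whether the latency win is meaningful depends on your hardware and serving stack:

```python
# A minimal sketch of post-training dynamic quantization with PyTorch.
# "EleutherAI/gpt-neo-125m" is a stand-in; swap in whatever backbone you use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Swap the Linear layers for int8 versions to speed up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Should we build or buy?", return_tensors="pt")
with torch.no_grad():
    output_ids = quantized.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```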

It’s worth remembering that while purchased models might be impressive at many tasks, models trained in-house may well achieve sizable performance improvements on a specific task or domain. At scale, these models are much cheaper to deploy and can lead to the development of significantly defensible products that can take competitors much longer to replicate.

The most prominent example of this is the TikTok recommendation algorithm. Despite many of its details being publicly available in various research papers, even massive ML teams at its competitors have yet to replicate and deploy a similarly effective system.

Of course, there are no free lunches: Developing, deploying and maintaining elaborate machine learning systems in-house requires data engineering, machine learning and DevOps expertise, all of which are scarce and highly sought-after. Obviously, that requires high upfront investment.

The success of machine learning projects is also less predictable when you’re building them in-house, and some estimates put the likelihood of success at around the 20% mark. This may prolong the time-to-market.

All in all, while building looks extremely attractive in the long run, it requires leadership with a strong appetite for risk over an extended time period as well as deep coffers to back said appetite.

The middle road

That said, there are middle ground approaches that attempt to balance these positives and negatives. The first and most oft-discussed is prompt engineering.

This approach starts with buying, then building a custom input template that replaces fine-tuning in some sense. It aims to guide the off-the-shelf model with clear examples or instructions, creating a middling level of defensibility in the form of custom prompts while retaining the benefits of buying.
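As a minimal sketch, assuming a hypothetical support-ticket triage use case, such a custom template might look like this (the examples and labels are placeholders):

```python
# A minimal few-shot prompt template (hypothetical ticket-triage use case).
# The custom examples and instructions are where the "middling defensibility" lives.
FEW_SHOT_EXAMPLES = [
    ("I was charged twice this month.", "billing"),
    ("The app crashes when I upload a photo.", "bug"),
    ("How do I export my data?", "how-to"),
]

def build_prompt(ticket: str) -> str:
    lines = ["Classify each support ticket as billing, bug or how-to.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Ticket: {text}\nCategory: {label}\n")
    lines.append(f"Ticket: {ticket}\nCategory:")
    return "\n".join(lines)

# The resulting string is what you send to the off-the-shelf completion API.
print(build_prompt("My invoice total looks wrong."))
```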

Another way is to seek out open-source backbones of roughly equivalent quality and build on top of them. This reduces upfront costs and, to some extent, the unpredictability of outputs, while retaining the flexibility of building. For example, GPT-J and GPT-Neo are two open-source alternatives to GPT-3.
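For illustration, here’s a rough sketch of fine-tuning GPT-Neo with the Hugging Face Trainer; the data file, model size and hyperparameters are placeholders, not recommendations:

```python
# A minimal sketch: fine-tuning an open-source backbone (GPT-Neo) with Hugging Face.
# "domain_corpus.txt" and the hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-neo-finetuned",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt-neo-finetuned")  # the weights stay in your custody
```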

A slightly more intricate and newer approach is closed-source approximation. This involves training an in-house model that aims to minimize the difference between GPT-3’s output and its own, either at the final output or at an earlier embedding stage. This reduces time-to-market by leveraging GPT-3 in the short term, then transitioning to in-house systems as their quality improves, enabling cost optimization and defensibility in the long run.
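In spirit, this is distillation against cached GPT-3 outputs. The sketch below assumes you have already logged GPT-3 embeddings for your own inputs (the tensors here are random placeholders) and trains a small in-house encoder to approximate them:

```python
# A minimal sketch of closed-source approximation: train an in-house encoder to
# mimic cached GPT-3 embeddings. All data here is a random placeholder.
import torch
from torch import nn

VOCAB, DIM, TEACHER_DIM = 32_000, 256, 12_288  # assumed sizes, not real specs

class StudentEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.project = nn.Linear(DIM, TEACHER_DIM)  # map into the teacher's space

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        return self.project(hidden.mean(dim=1))  # one vector per input

student = StudentEncoder()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

# Placeholder batch: token ids plus the GPT-3 embeddings logged for them.
token_ids = torch.randint(0, VOCAB, (8, 64))
teacher_embeddings = torch.randn(8, TEACHER_DIM)

for _ in range(3):  # toy training loop
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(student(token_ids), teacher_embeddings)
    loss.backward()
    optimizer.step()
```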

Still confused about which way to go? Here’s a three-question quiz:

Are you currently an AI business?

If yes, you’ll need to build to maintain defensibility.

If you’re not, buy for now and use prompt engineering to tailor the model to your use cases.

If you want to be an AI business, work toward that over time: store data cleanly, start building an ML team and identify monetizable use cases.

Is your use case addressed by existing pre-trained models?

Can you address it by simply buying, without putting in much additional work? If so, you should probably shell out the cash, especially if time-to-market is a factor.

Building is not fast, easy or cheap, and it’s especially hard to justify if your use case is non-monetizable or the model is only for internal use.

Do you have unpredictable or extremely high request load?

If yes, buying might not be economically feasible, especially in a consumer setting. That said, be realistic — quantify your request load and buying costs to whatever extent possible. Building can be deceptively expensive, especially because you’ll need to hire ML engineers, buy tooling and pay for hosting.

Hopefully, this helps you kick off your journey!
