Startups

5 steps to ensure startups successfully deploy LLMs

Lu Zhang

Contributor

Lu Zhang, the founder and managing partner of Fusion Fund, is a renowned Silicon Valley–based investor and a serial entrepreneur in healthcare.

ChatGPT’s launch ushered in the age of large language models. In addition to OpenAI’s offerings, other LLMs include Google’s LaMDA family (which powers Bard), BLOOM (an open-science collaboration coordinated by Hugging Face’s BigScience initiative), Meta’s LLaMA, and Anthropic’s Claude.

More will no doubt be created. In fact, an April 2023 Arize survey found that 53% of respondents planned to deploy LLMs within the next year or sooner. One approach is to build a “vertical” LLM: start with an existing model and fine-tune it on knowledge specific to a particular domain. This tactic can work in life sciences, pharmaceuticals, insurance, finance, and other business sectors.
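For teams taking the vertical route, the core workflow is to take an open base model and continue training it on a domain corpus. Below is a minimal sketch using the Hugging Face stack; the base model name and corpus path are placeholders, and real projects typically add parameter-efficient methods such as LoRA plus far more data and evaluation.

```python
# Minimal sketch: fine-tuning an open base model on domain-specific text
# to create a "vertical" LLM. Model name and file path are placeholders.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # assumed open base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Domain corpus: one document per line in a plain-text file (hypothetical path).
dataset = load_dataset("text", data_files={"train": "insurance_corpus.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="vertical-llm", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```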

Deploying an LLM can provide a powerful competitive advantage — but only if it’s done well.

LLMs have already generated newsworthy problems, such as their tendency to “hallucinate” incorrect information. Hallucination is a serious issue, but it can also distract leadership from essential concerns about the processes that generate those outputs, which can be just as problematic.

The challenges of training and deploying an LLM

One issue with using LLMs is their tremendous operating expense because the computational demand to train and run them is so intense (they’re not called large language models for nothing).

First, the hardware to run the models on is costly. Nvidia’s H100 GPU, a popular choice for LLM workloads, has been selling on the secondary market for about $40,000 per chip, and one source estimated it would take roughly 6,000 chips to train an LLM comparable to GPT-3.5. That works out to roughly $240 million on GPUs alone.

Another significant expense is powering those chips. Training a model alone is estimated to require about 10 gigawatt-hours (GWh) of power, roughly the yearly electricity use of 1,000 U.S. homes. Once the model is trained, its electricity cost varies but can become exorbitant: the same source estimates that running GPT-3.5 consumes about 1 GWh a day, the combined daily energy use of some 33,000 households.
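The arithmetic behind those figures is straightforward to sanity-check. The sketch below reproduces it, with the rough assumptions (per-chip price, chip count, per-household consumption) spelled out as constants rather than measured values.

```python
# Back-of-envelope reproduction of the cost and energy figures above.
# All constants are rough assumptions from the text, not measurements.

GPU_PRICE_USD = 40_000              # approximate secondary-market price of one Nvidia H100
GPU_COUNT = 6_000                   # one estimate of chips needed for a GPT-3.5-class model
TRAINING_ENERGY_KWH = 10_000_000    # ~10 GWh to train the model
DAILY_INFERENCE_KWH = 1_000_000     # ~1 GWh per day to run it

HOME_YEARLY_KWH = 10_000            # rough yearly electricity use of a U.S. home
HOME_DAILY_KWH = 30                 # rough daily electricity use of a U.S. household

gpu_capex = GPU_PRICE_USD * GPU_COUNT                             # ≈ $240 million
homes_per_training_run = TRAINING_ENERGY_KWH / HOME_YEARLY_KWH    # ≈ 1,000 home-years
households_per_day = DAILY_INFERENCE_KWH / HOME_DAILY_KWH         # ≈ 33,000 households

print(f"GPU capital cost: ${gpu_capex:,.0f}")
print(f"Training energy ≈ yearly use of {homes_per_training_run:,.0f} homes")
print(f"Daily inference energy ≈ daily use of {households_per_day:,.0f} households")
```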

Power consumption is also a potential pitfall for the user experience when running LLMs on portable devices: heavy use could drain a device’s battery very quickly, which would be a significant barrier to consumer adoption.

Integrating LLMs into devices presents another critical challenge to the user experience: effective communication between the device and the model. If that channel has high latency, users will be frustrated by long lags between queries and responses.

Finally, privacy is crucial to offering an LLM-based service that complies with data protection regulations and that customers are willing to use. Because LLMs tend to memorize their training data, there is a risk of exposing sensitive information when users query the model. User interactions are also logged, so users’ questions, some of which contain private information, may be vulnerable to acquisition by hackers.

The threat of data theft is not merely theoretical; several feasible backdoor attacks on LLMs are already under scrutiny. So, it’s unsurprising that over 75% of enterprises are holding off on adopting LLMs out of privacy concerns.

For all the above reasons, not least the risk of bankrupting the company or causing catastrophic reputational damage, business leaders are wary of diving in during these early days of LLMs. To succeed, they must approach the problem holistically: the challenges above need to be addressed together before a viable LLM-based offering can launch.

It’s often difficult to know where to start. Here are five crucial points tech leaders and startup founders should consider when planning a transition to LLMs:

1. Keep an eye out for new hardware optimizations

Although training and running an LLM is expensive today, market competition is already driving innovations that reduce power consumption and boost efficiency, which should bring costs down. One example is Qualcomm’s Cloud AI 100, which the company says is designed for “deep learning with low power consumption.”

Leaders need to empower their teams to stay abreast of hardware developments that cut power consumption and, therefore, costs. What isn’t within reach today may soon become feasible with the next wave of efficiency breakthroughs.

2. Explore a distributed data analysis approach

The infrastructure supporting an LLM can combine edge and cloud computing for distributed data analysis. This suits several use cases, such as handling critical, highly time-sensitive data on an edge device while leaving less time-sensitive work to be processed in the cloud. The approach gives users interacting with the LLM much lower latency than if all computation were done in the cloud.

On the other hand, offloading computation to the cloud helps preserve a device’s battery, so there are real trade-offs to weigh in a distributed data analysis approach. Decision-makers must determine the right split of computation between device and cloud given the needs of the moment, as sketched below.
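Here is a minimal sketch of that routing decision. The on-device and cloud calls are hypothetical placeholders, not any particular vendor’s API; a real system would also consider network conditions, data sensitivity, and model capability.

```python
# Minimal sketch of an edge/cloud routing policy for LLM requests.
# run_on_device() and run_in_cloud() are hypothetical stand-ins for a small
# on-device model and a hosted LLM API, respectively.

from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    latency_sensitive: bool  # e.g., an interactive voice query vs. a batch summarization job


def run_on_device(prompt: str) -> str:
    # Placeholder: invoke a small quantized model running locally on the device.
    return f"[edge] {prompt}"


def run_in_cloud(prompt: str) -> str:
    # Placeholder: call a large hosted model over the network.
    return f"[cloud] {prompt}"


def route(request: Request, battery_pct: float) -> str:
    # Keep time-critical work on the device for low latency; send everything
    # else to the cloud, and fall back to the cloud entirely when battery is low.
    if request.latency_sensitive and battery_pct > 20:
        return run_on_device(request.prompt)
    return run_in_cloud(request.prompt)


print(route(Request("What's on my calendar today?", latency_sensitive=True), battery_pct=80))
print(route(Request("Summarize this 200-page report.", latency_sensitive=False), battery_pct=80))
```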

3. Stay flexible regarding which model to use

It’s essential to stay flexible about which underlying model you build a vertical LLM on, because each has pros and cons for any given use case. That flexibility matters not just at the outset, when selecting a model, but throughout its use, since needs can change. In particular, open source options are worth considering because these models can be smaller and less expensive.

Building an infrastructure that can accommodate switching to a new model without operational disruption is essential. Some companies now offer “multi-LLM” solutions, such as Merlin, whose DiscoveryPartner generative AI platform uses LLMs from OpenAI, Microsoft, and Anthropic for document analysis.
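One practical way to preserve that flexibility is to code against a thin, provider-agnostic interface so the model behind it can be swapped by configuration rather than by rewriting call sites. A minimal sketch, with illustrative class names rather than any vendor’s actual SDK:

```python
# Minimal sketch of a provider-agnostic LLM interface so the underlying model
# can be swapped without touching application code. Class names and backends
# are illustrative, not a real vendor SDK.

from abc import ABC, abstractmethod


class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenSourceModelClient(LLMClient):
    def complete(self, prompt: str) -> str:
        return f"[open-source model] {prompt}"  # placeholder for a local model call


class HostedModelClient(LLMClient):
    def complete(self, prompt: str) -> str:
        return f"[hosted model] {prompt}"  # placeholder for a vendor API call


BACKENDS = {"open_source": OpenSourceModelClient, "hosted": HostedModelClient}


def get_client(name: str) -> LLMClient:
    # Selecting the backend from configuration keeps the rest of the codebase
    # unchanged when you switch models.
    return BACKENDS[name]()


client = get_client("open_source")
print(client.complete("Summarize the attached claims report."))
```

With this structure, switching models becomes a one-line configuration change rather than a migration project.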

4. Make data privacy a priority

In an era of increasing data regulation and high-profile data breaches, privacy must be a priority. One approach is sandboxing, in which a controlled computational environment confines data to a restricted system.

Another is data obfuscation (such as data masking, tokenization, or encryption), which lets the LLM work with the data while keeping it unintelligible to anyone who might tap into it. These and other techniques can assure users that privacy is baked into your LLM-based offering.
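As a minimal sketch of the masking idea, assuming a simple regex-based detector (real deployments should use a dedicated PII-detection and tokenization service and keep the token mapping in a secured store):

```python
# Minimal sketch: mask obvious PII before a prompt leaves your trusted boundary.
# The regexes are illustrative only; production systems need proper PII detection.

import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with placeholder tokens and return the masked text
    plus a mapping kept inside the trusted boundary for later re-identification."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping


masked, mapping = mask(
    "Contact Jane at jane.doe@example.com or 555-123-4567 about claim 123-45-6789."
)
print(masked)   # the version that is safe to send to the LLM
print(mapping)  # stays server-side so responses can be re-identified if needed
```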

5. Looking ahead, consider analog computing

An even more radical approach to deploying hardware for LLMs is to move away from digital computing. Once considered more of a curiosity in the IT world, analog computing could ultimately prove to be a boon to LLM adoption because it could reduce the energy consumption required to train and run LLMs.

This is more than just theoretical. For example, IBM has been developing an “analog AI” chip that could be 40 to 140 times more energy efficient than GPUs for training LLMs. As similar chips enter the market from competing vendors, we will see market forces bring down their prices.

The LLM future is here — are you ready?

LLMs are exciting, but developing and adopting them requires overcoming several feasibility hurdles. Fortunately, an increasing number of tools and approaches are bringing down costs, making systems more challenging to hack and ensuring a positive user experience.

So, don’t hesitate to explore how LLMs might turbocharge your business. With the right approach, your organization can be well positioned to take advantage of everything this new era offers. You’ll be glad you got started now.
