
4 questions to ask before building a computer vision model


Image Credits: Stefano Stignani / EyeEm / Getty Images

Eric Landau

Contributor

Before Eric Landau co-founded Encord, he spent nearly a decade at DRW, where he was lead quantitative researcher on a global equity delta one desk and put thousands of models into production. He holds an S.M. in Applied Physics from Harvard University, an M.S. in Electrical Engineering and a B.S. in Physics from Stanford University.


In 2015, the launch of YOLO — a high-performing computer vision model that could produce predictions for real-time object detection — started an avalanche of progress that sped up computer vision’s jump from research to market.

It’s since been an exciting time for startups as entrepreneurs continue to discover use cases for computer vision in everything from retail and agriculture to construction. With lower computing costs, greater model accuracy and rapid proliferation of raw data, an increasing number of startups are turning to computer vision to find solutions to problems.

However, before founders begin building AI systems, they should think carefully about their risk appetite, data management practices and strategies for future-proofing their AI stack.


Below are four factors that founders should consider when deciding to build computer vision models.

Is deep learning the right tool for solving my problem?

It may sound crazy, but the first question founders should ask themselves is if they even need to use a deep learning approach to solve their problem.

During my time in finance, we would often hire new employees straight out of university who wanted to use the latest deep learning model to solve a problem. After spending time working on the model, they’d conclude that a variant of linear regression worked better.

The moral of the story?

Deep learning might sound like a futuristic solution, but in reality, these systems are sensitive to many small factors. Often, you can already use an existing and simpler solution — such as a “classical” algorithm — that produces an equally good or better outcome for lower cost.

Consider the problem, and the solution, from all angles before building a deep learning model.

Deep learning in general, and computer vision in particular, hold a great deal of promise for creating new approaches to solving old problems. However, building these systems comes with an investment risk: You’ll need machine learning engineers, a lot of data and validation mechanisms to put these models into production and build a functioning AI system.

It’s best to evaluate whether a simpler solution could solve your problem before beginning such a large-scale effort.
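
As a rough illustration of that evaluation, the sketch below benchmarks a classical baseline before any deep learning work starts. The dataset and scikit-learn tooling are illustrative assumptions, not a prescription:

```python
# Illustrative sketch: measure a simple "classical" baseline before committing to a
# deep learning build. The dataset and scikit-learn tooling are assumptions for the example.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Regularized logistic regression stands in for the "simpler solution."
baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
baseline_accuracy = cross_val_score(baseline, X, y, cv=5).mean()

print(f"Classical baseline accuracy: {baseline_accuracy:.3f}")
# Only if this number falls short of the product's requirements does the cost of
# ML engineers, large datasets and validation infrastructure become worth paying.
```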

Perform a thorough risk assessment

Before building any AI system, founders must consider their risk appetite, which means evaluating the risks that occur at both the application layer and the research and development stage.

Roughly speaking, in R&D, the risk is that a model won’t meet certain metric-based performance criteria, and at the application level, the risk is that the production system will not succeed within the context in which it is placed.

While machine learning-oriented founders tend to focus on R&D risks, a better first step is to create assessment criteria for the application-level risk. Factors in this assessment will differ by application, but they often include potential risks in regulation, public perception and systems-level engineering.

The first step of building an effective framework often involves understanding the consequence of model errors (such as false positives or false negatives) within your application. The target use case has an important effect on this analysis — after all, there’s a huge difference between the application risk for using AI to filter emails and using AI to run autonomous vehicles.

The consequence of a model allowing one of every 1,000 spam emails to go to your inbox is minor. At worst, receiving a spam email moderately annoys someone, so this model has an acceptable application risk level for production. However, the consequence of mistaking a green light for a red one is severe. A computer vision model that mistakes one of every 1,000 green lights for red is just not capable of going into production.
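
One way to make that comparison concrete is to attach a notional cost to each error and compute the expected cost per prediction. The error rates and per-error costs below are invented for illustration, not real industry figures:

```python
# Illustrative sketch: same error rate, very different application risk.
# Both the error rates and per-error costs are invented numbers.
def expected_cost_per_prediction(error_rate: float, cost_per_error: float) -> float:
    """Expected cost incurred each time the model makes a prediction."""
    return error_rate * cost_per_error

# Spam filter: one in 1,000 spam emails slips through; the cost is minor annoyance.
spam = expected_cost_per_prediction(error_rate=1 / 1000, cost_per_error=0.01)

# Traffic lights: one in 1,000 lights misread; a single error can be catastrophic.
traffic = expected_cost_per_prediction(error_rate=1 / 1000, cost_per_error=1_000_000)

print(f"Spam filter:    {spam:.5f} (acceptable for production)")
print(f"Traffic lights: {traffic:.0f} (nowhere near acceptable)")
```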

Founders should first map out the consequences of errors in their application, because these consequences influence the evaluation of R&D risk. Depending on the application risk, AI systems need to meet different performance benchmarks before going into production.

For low-risk applications, simply beating the (human-based) status quo is often enough. High-risk applications, such as self-driving cars, need to meet new gold standards before people can trust the model’s performance. It doesn’t matter if autonomous vehicles are less likely to crash than human drivers, because the technology is held to a higher standard.

Beware the prototype-production gap

Making a proof-of-concept model for a given use case is (often) relatively simple. Making a model suitable for an application in a production environment requires more than an order of magnitude more work.

To avoid falling into the so-called prototype-production gap, founders must think carefully about the performance characteristics required for model deployment, and how these needs will influence the length and resourcing of the development cycle.

Consider the development cycle required for deploying a computer vision model designed for a high-risk application. Let’s say a model achieved 95% accuracy at the prototype stage. However, to go into production, that model needs to make accurate predictions 99.99% of the time. In terms of development, closing that remaining 4.99-percentage-point gap is much more challenging than building the prototype.
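
Framed as error rates rather than accuracy, the size of that jump becomes clearer:

```python
# Back-of-the-envelope: 95% -> 99.99% accuracy means the error rate must fall
# from 5% to 0.01%, i.e. roughly 500 times fewer mistakes.
prototype_error = 1 - 0.95     # 0.05
production_error = 1 - 0.9999  # 0.0001
print(f"Required reduction in errors: {prototype_error / production_error:.0f}x")
```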

To achieve that level of accuracy, the model must train on vast amounts of data and learn to react appropriately to all types of situations. AI systems lack common sense, and computer vision models can’t reason as a human would. When they encounter an unexpected scenario that they have never seen before, these models won’t perform predictably. These scenarios, called edge cases, are notoriously difficult to debug within a machine learning context, because machine learning engineers must locate the few examples out of millions where the model fails for a systematic reason.
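
A common (and generic) way to surface candidate edge cases is to rank evaluation examples by how poorly the model scores the true label and review the worst offenders by hand. The data below is simulated purely to show the shape of the workflow:

```python
# Illustrative sketch: surface candidate edge cases by ranking examples where the
# model assigns the lowest probability to the true label. The data here is simulated.
import numpy as np

rng = np.random.default_rng(seed=0)
true_label_prob = rng.beta(8, 1, size=1_000_000)  # stand-in for real model outputs

# The handful of worst-scored examples are the ones worth inspecting by hand
# for a systematic failure mode (e.g., reflections of cyclists).
worst_indices = np.argsort(true_label_prob)[:50]
for idx in worst_indices[:5]:
    print(f"example {idx}: p(true label) = {true_label_prob[idx]:.4f}")
```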

Edge cases often prevent models from achieving 100% accuracy in the testing phase. Again, autonomous vehicles are a great example, because human drivers can use reason while computer vision models can’t. For example, let’s say that after training on enormous amounts of data, a model becomes capable of recognizing cyclists, but then it encounters a reflection of a cyclist. In this situation, the model will likely behave as if a real cyclist rather than a reflection were present and react unexpectedly. A human would not make this mistake.

Founders should be aware that applications requiring a high level of accuracy to enter production demand more training time and more training data during the development cycle, so they need to budget for additional resources such as time and money before they begin building their models.

Take a data-centric approach

Once founders decide to build a model, they should take a data-centric rather than model-centric approach.

As open source models continue to improve, a company’s competitive edge will no longer come from building more sophisticated models: it’ll come from the quality and quantity of its data. The data, not the model, will become the core of the IP.

To understand how not taking a data-centric approach has stifled deep learning progress, consider the problem of algorithmic bias.

A lot of medical AI fails to make the jump from the research lab to the real world. That’s because researchers have tended to focus on improving the accuracy of the model in controlled settings rather than thinking carefully about whether their training data is representative of the population at large.

When medical AI models train on biased datasets, they do not learn how to make predictions about people of varying ages, racial demographics and genders. This knowledge gap leads to misdiagnoses and the perpetuation of existing medical biases.
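
A lightweight guard against that failure mode is to audit the training set’s composition against the target population before training. The column name, groups and reference shares below are invented for illustration:

```python
# Illustrative sketch: compare training-set composition to the target population.
# The age groups and reference shares are invented numbers.
import pandas as pd

train = pd.DataFrame({"age_group": ["18-40"] * 700 + ["41-65"] * 250 + ["65+"] * 50})
population_share = {"18-40": 0.35, "41-65": 0.40, "65+": 0.25}

train_share = train["age_group"].value_counts(normalize=True)
for group, target in population_share.items():
    observed = train_share.get(group, 0.0)
    print(f"{group}: train {observed:.2f} vs population {target:.2f} (gap {observed - target:+.2f})")
```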

With a data-centric approach, the aim is to think from first principles about what data the model needs to train on to achieve the best possible performance.

When building data-centric AI for computer vision, your success will depend on how well you source data. Procuring the best proprietary datasets available is a priority. Unlike more established companies that have been generating their own data, startups may find obtaining exclusive datasets challenging and should consider partnering with established companies or using creative methods such as sophisticated scraping to secure unique datasets.

After securing a supply of data, set up a data management system that enables machine learning engineers to effectively store, filter, query and visualize data in a scalable way. The system needs to be structured so that it can accommodate future needs and uses, including ingesting additional data, reorganizing data, deleting data, cleaning data, querying data with arbitrary points of inquiry and more.
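
At its simplest, such a system is a queryable metadata layer sitting on top of the raw files. The sketch below uses SQLite and an invented schema just to show the kind of arbitrary filtering described above:

```python
# Illustrative sketch: a minimal, queryable metadata layer over image data.
# SQLite and the schema fields are assumptions made for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE images (
        id INTEGER PRIMARY KEY,
        uri TEXT,            -- where the raw file lives
        source TEXT,         -- which data supply it came from
        labeled INTEGER,     -- 0/1: has it been annotated yet?
        review_status TEXT   -- e.g. 'pending', 'approved', 'rejected'
    )
""")
conn.executemany(
    "INSERT INTO images (uri, source, labeled, review_status) VALUES (?, ?, ?, ?)",
    [
        ("s3://bucket/img1.jpg", "partner_a", 1, "approved"),
        ("s3://bucket/img2.jpg", "web_scrape", 0, "pending"),
    ],
)

# An arbitrary point of inquiry: everything that still needs annotation.
rows = conn.execute("SELECT uri, source FROM images WHERE labeled = 0").fetchall()
print(rows)
```

The specific storage technology matters less than being able to ask new questions of the data later without re-engineering the system.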

With a management system in place, the next step is ensuring a process for continuous annotation and review. The real world contains messy and imperfect data, so data-centric AI requires robust and iterative annotation pipelines as opposed to one-off annotations.

Think about the subject-matter expertise and labeling tools you’ll need to ensure that high-quality annotations can be completed as efficiently as possible. Also, keep in mind that in the world of data-centric AI, the annotation layer is no longer just procedural. The label structures and architectural design choices will influence how the system is going to learn, and these data labeling techniques will become intellectual property that can give companies a competitive advantage.

Taking a data-centric approach also enables companies to remain model-agnostic, which means they can reap the rewards of future innovations. Having a system dependent on a particular architecture limits a company’s ability to take advantage of more advanced models. For instance, if a company relies on a label ingestion system built for the needs of one model, then refactoring that process might prove difficult and prevent a company from incorporating a newer, better model into its business.
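
One concrete way to stay model-agnostic is to keep annotations in a canonical, model-independent format and write a thin converter per model family. The canonical layout below is invented for illustration; only the YOLO-style output format reflects a real convention:

```python
# Illustrative sketch: store labels in a canonical format and convert per model.
# The canonical dict layout is an invention for this example.
canonical_label = {
    "image_id": "img_0001",
    "boxes": [{"class": "cyclist", "cx": 0.42, "cy": 0.55, "w": 0.10, "h": 0.25}],  # normalized
}

CLASS_INDEX = {"cyclist": 0}

def to_yolo_lines(label: dict) -> list:
    """Convert a canonical label to YOLO-style text lines: class_idx cx cy w h."""
    return [
        f"{CLASS_INDEX[box['class']]} {box['cx']} {box['cy']} {box['w']} {box['h']}"
        for box in label["boxes"]
    ]

print(to_yolo_lines(canonical_label))
# Swapping in a different detector later only requires a new converter,
# not a refactor of the whole label ingestion pipeline.
```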

At Encord, we know it’s the data, not the model, that matters most, and investing in a data-centric approach allowed us to use the same model both for detecting gastrointestinal polyps and for finding illegal fishing vessels in the ocean.

The technological landscape is evolving rapidly, and in five years, deep learning will look very different. As a result, any AI system developed today needs to take a data-centric approach so that it can incorporate the models of the future.
