Artificial raises $21M led by Microsoft’s M12 for a lab automation platform aimed at life sciences R&D

Automation is extending into every aspect of how organizations get work done, and today comes news of a startup that is building tools for one industry in particular: life sciences. Artificial, which has built a software platform for laboratories to assist with, or in some cases fully automate, research and development work, has raised $21.5 million.

It plans to use the funding to continue building out its software and capabilities, to hire more people, and for business development, according to Artificial’s CEO and co-founder David Fuller. The company already has a number of customers, including Thermo Fisher and Beam Therapeutics, which use its software directly and, in partnership with Artificial, offer it to their own customers. Sold as aLab Suite, Artificial’s technology can orchestrate and manage the robotic machines that labs might be using to handle some of the work, and assist scientists when they are carrying out the work themselves.

“The basic premise of what we’re trying to do is accelerate the rate of discovery in labs,” Fuller said in an interview. He believes the process of bringing more AI into labs to improve how they work is long overdue. “We need to have a digital revolution to change the way that labs have been operating for the last 20 years.”

The Series A is being led by Microsoft’s venture fund M12 — a financial and strategic investor — with Playground Global and AME Cloud Ventures also participating. Playground Global, the VC firm co-founded by ex-Google exec and Android co-creator Andy Rubin (who is no longer with the firm), has been focusing on robotics and life sciences and it led Artificial’s first and only other round. Artificial is not disclosing its valuation with this round.

Fuller hails from a background in robotics, specifically industrial robots and automation. Before founding Artificial in 2019, he was at Kuka, the German robotics maker, for a number of years, culminating in the role of CTO; prior to that, Fuller spent 20 years at National Instruments, the instrumentation, test equipment and industrial software giant. Meanwhile, Artificial’s co-founder, Nikhita Singh, has insight into how to bring the advances of robotics into environments that are quite analogue in culture. She previously worked on human-robot interaction research at the MIT Media Lab, and before that spent years at Palantir and worked on robotics at Berkeley.

As Fuller describes it, he saw an interesting gap (and opportunity) in the market: applying automation, which he had seen advance work in industrial settings, to the world of life sciences, both to help scientists better track what they are doing and to help them carry out some of the more repetitive work that they have to do day in, day out.

This gap is perhaps more in the spotlight today than ever before, given that we are in the middle of a global health pandemic. The pandemic has prevented many labs from operating with full in-person teams and has increased the reliance on systems that can crunch numbers and carry out work without as many people present. And, of course, the need for that work (whether it is related directly to Covid-19 or not) has perhaps never appeared as urgent as it does right now.

There have been a lot of advances in robotics, specifically around hardware like robotic arms, to manage some of the precision needed to carry out lab work. But until now, no real effort has been made to build platforms that bring together all of the work done by that hardware (or, in the words of automation specialists, to “orchestrate” that work and data), or to link the data from those robot-led efforts with the work that human scientists still carry out. Artificial estimates that some $10 billion is spent annually on lab informatics and automation software, yet data models to unify that work, and platforms that reach across it all, remain absent. That has, in effect, served as a barrier to labs modernising as much as they could.

A lab, as Fuller describes it, is essentially composed of high-end instrumentation for analytics alongside robotic systems for liquid handling. “You can really think of a lab, frankly, as a kitchen,” he said, “and the primary operation in that lab is mixing liquids.”

But it is also not unlike a factory. As those liquids are mixed, a robotic system typically moves pipettes and liquids around, in and out of plates, and mixes them. “There’s a key aspect of material flow through the lab, and the material flow part of it is much more like classic robotics,” he said. In other words, there is, as he says, “a combination of bespoke scientific equipment that includes automation, and then classic material flow, which is much more standard robotics,” and that combination is what makes the lab ripe as an applied environment for automation software.
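To make the orchestration idea more concrete, here is a minimal sketch of how a lab workflow that mixes robot-driven and scientist-performed steps might be modelled in software. It is an illustration only, assuming a simple step-list model; the class, field and step names are hypothetical and do not describe Artificial’s actual aLab Suite API.

```python
# Hypothetical sketch: a lab workflow as an ordered list of steps, where each
# step is either executed by an automated system or handed off to a scientist.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Step:
    name: str
    run: Callable[[], None]   # command sent to an instrument or robot
    manual: bool = False      # True if a scientist performs this step


@dataclass
class Workflow:
    steps: List[Step] = field(default_factory=list)

    def execute(self) -> None:
        # "Orchestration" here just means sequencing: automated steps are run,
        # manual steps are surfaced to the scientist as guided instructions.
        for step in self.steps:
            if step.manual:
                print(f"[assist] waiting for scientist: {step.name}")
            else:
                print(f"[robot]  running: {step.name}")
                step.run()


# The "kitchen-like" flow Fuller describes: move plates, dispense liquids, analyze.
workflow = Workflow(steps=[
    Step("move plate from storage to liquid handler", run=lambda: None),
    Step("dispense reagent into wells", run=lambda: None),
    Step("visually inspect the mixture", run=lambda: None, manual=True),
    Step("load plate into analyzer", run=lambda: None),
])
workflow.execute()
```

The point of the sketch is the split Fuller describes: standard material-flow steps can be automated outright, while bespoke scientific steps stay with the human and the software simply tracks and guides them.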

To note: the idea is not to remove humans altogether, but to provide assistance so that they can do their jobs better. Fuller points out that even the automotive industry, which has been automated for 50 years, still has about 6% of its work done by humans. If that is the benchmark, there is plenty of room left in labs: Fuller estimates that some 60% of all work in the lab is done by humans. And part of the reason for that is simply that it is too complex to replace scientists, whom he described as “artists”, altogether (for now, at least).

“Our solution augments the human activity and automates the standard activity,” he said. “We view that as a central thesis that differentiates us from classic automation.”

There have been a number of other startups emerging that are applying some of the learnings of artificial intelligence and big data analytics for enterprises to the world of science. They include the likes of Turing, which is applying this to helping automate lab work for CPG companies; and Paige, which is focusing on AI to help better understand cancer and other pathology.

The Microsoft connection is one that could well play out in how Artificial’s platform develops going forward, not just in how data is perhaps handled in the cloud, but also on the ground, specifically with augmented reality.

“We see massive technical synergy,” Fuller said. “When you are in a lab you already have to wear glasses… and we think this has the earmarks of a long-term use case.”

Fuller mentioned that one area the company is looking at would involve equipping scientists and other technicians with Microsoft’s HoloLens to help direct them around the lab, and to make sure work is being carried out consistently by comparing what is happening in the physical world to a “digital twin” of the lab containing data about supplies, where they are located, and what needs to happen next.
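The “digital twin” concept here is, at its core, a structured record of the lab’s expected state that real-world observations can be checked against. The sketch below illustrates that idea under simple assumptions; the types, fields and messages are hypothetical and are not a description of Artificial’s or Microsoft’s actual APIs.

```python
# Hypothetical sketch: a digital twin as a record of where supplies should be
# and what should happen next, checked against observations from the physical
# lab (for example, a scan made through an AR headset such as HoloLens).
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SupplyRecord:
    location: str   # e.g. "deck position 2" or "shelf B3"
    quantity: int


@dataclass
class LabTwin:
    supplies: Dict[str, SupplyRecord]
    next_steps: List[str]

    def check_observation(self, item: str, observed_location: str) -> str:
        """Compare one real-world observation against the twin's expected state."""
        record = self.supplies.get(item)
        if record is None:
            return f"{item} is not tracked in the digital twin"
        if record.location != observed_location:
            return f"{item} expected at {record.location}, seen at {observed_location}"
        return f"{item} is where the twin expects it; next step: {self.next_steps[0]}"


twin = LabTwin(
    supplies={"reagent A": SupplyRecord(location="deck position 2", quantity=12)},
    next_steps=["dispense reagent A into plate 7"],
)
print(twin.check_observation("reagent A", "deck position 2"))
```

In a guided-work scenario like the one Fuller describes, mismatches flagged by a check of this kind are what would be surfaced to the technician through the headset.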

It is this, and all of the other areas that have yet to be brought into an AI-led enterprise future, that interested Microsoft.

“Biology labs today are light- to semi-automated—the same state they were in when I started my academic research and biopharmaceutical career over 20 years ago. Most labs operate more like test kitchens rather than factories,” said Dr. Kouki Harasaki, an investor at M12, in a statement. “Artificial’s aLab Suite is especially exciting to us because it is uniquely positioned to automate the masses: it’s accessible, low code, easy to use, highly configurable, and interoperable with common lab hardware and software. Most importantly, it enables Biopharma and SynBio labs to achieve the crowning glory of workflow automation: flexibility at scale.”

Harasaki is joining Peter Barrett, a founder and general partner at Playground Global, on Artificial’s board with this round.

“It’s become even more clear as we continue to battle the pandemic that we need to take a scalable, reproducible approach to running our labs, rather than the artisanal, error-prone methods we employ today,” Barrett said in a statement. “The aLab Suite that Artificial has pioneered will allow us to accelerate the breakthrough treatments of tomorrow and ensure our best and brightest scientists are working on challenging problems, not manual labor.”
