
How Typewise got into YC after pivoting to B2B productivity


Image Credits: Shutterstock

Swiss startup Typewise is showing the power of sticking with it: The team behind the patented text prediction technology — whose work on typing productivity started more than five years ago as a consumer keyboard-focused side hustle — has gained backing from Y Combinator and will be in the cohort pitching to investors during the accelerator’s Summer 2022 demo day early next month.

Typewise won a spot in YC (and its standard $500,000 backing) after pivoting to fully focus on the B2B market — aiming to serve demand for typing productivity gains in areas like customer service and sales, per co-founder David Eberle.

“Last year we realized where this makes most sense,” he tells TechCrunch. “Consumers type a few sentences here and there in WhatsApp and they don’t really care too much about being 20% or 30% faster or making one or two typos less. But businesses — especially where there’s a lot of writing happening, like in customer service and sales — that’s where even single digit percentages matter a lot and double digit even more.”

“Because it’s customer-facing communication then also quality matters a lot — because it can impact a brand’s reputation as well,” he adds. “So that, in the end, got us into YC.”

Back in 2020, Typewise raised what it billed at the time as a seed round of $1 million, but Eberle confirms it’s now classing that as more of a pre-seed and will be looking to raise a fresh seed when it pitches investors in September.

Despite shifting full focus onto B2B, Typewise’s consumer app — which has racked up more than 2 million downloads — was not wasted effort for the team. It helped them “fine-tune” their AI models, per Eberle, which in turn enabled the company to file a second patent earlier this year for technology that can predict entire sentences, not just the next word.

Sentence prediction is now a core selling point, underpinning efficiency gains which, in the case of one early Typewise customer — a parcel delivery and logistics company it has been working with the longest — hit 35% on average a few weeks after the business started using the technology.

Other early customers span a range of industries, including e-commerce, retail and insurance.

Typewise provides customers with its technology as a browser extension — which Eberle says works with a server-side API where the AI resides — and the whole package is designed to run on top of customer CRM systems, such as Salesforce or Zendesk, integrating Typewise’s text predictions into the relevant client system, like email or live chat (i.e., places where business agents are talking, by text, to their own customers).
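
To make the setup concrete, here is a minimal, purely illustrative sketch of what a round trip from such a browser extension (or CRM plugin) to a server-side prediction API might look like. The endpoint, payload fields and response shape are all assumptions for illustration, not Typewise’s actual interface.

```python
import requests

# Hypothetical endpoint and payload shape -- for illustration only,
# not Typewise's real API.
API_URL = "https://api.example.com/v1/predict"

def get_sentence_prediction(draft_text: str, channel: str, language: str) -> str:
    """Ask the server-side model to complete what the agent has typed so far."""
    response = requests.post(
        API_URL,
        json={
            "text": draft_text,    # the agent's draft so far
            "channel": channel,    # e.g. "email" or "live_chat"
            "language": language,  # e.g. "en" or "de"
        },
        timeout=0.2,  # keep the round trip short so the suggestion feels instant
    )
    response.raise_for_status()
    return response.json()["completion"]

# e.g. get_sentence_prediction("Thank you for reaching out, your parcel", "email", "en")
```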

The 10 or so early users of its MVP — which launched this spring — are seeing average gains of between 10% and 20% from integrating the text prediction tech into their workflows, per Eberle. But he says he’s confident the higher figure (35%) will become the benchmark, not the outlier, as Typewise tweaks the parameters of its models or otherwise tunes them to customer data and needs (and as customer staff get accustomed to using the AI-powered text prediction tool).

Asked about the difference vs. other text prediction technologies, Eberle points out that Typewise provides a base language model (covering 40 languages, though early customers are focused on English and German) and also retrains and refines that model on real customer data. This means it’s able to offer customized predictions, which he says are around 2.5x more accurate than a generic next-word prediction AI — such as you might find baked into your mobile OS or email client — which is not trained on customer-specific data.

“For example, we would look at all the customer service tickets from the past year or two and we would take those and there’s a complicated filtering process (because maybe you have to weed out bad quality language that you do not want to incorporate into your training sets),” he says. “And then after that the AI then refines itself on the customer data and … if you compare our prediction to like a Gmail prediction, where the sentences are very standard — we get actual content.”
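
In code terms, that pipeline might look something like the sketch below: a crude quality filter over historical tickets followed by a refinement step. The heuristics and function names are invented for illustration; Eberle only says the real filtering process is more involved.

```python
import re

def filter_tickets(tickets: list[str], min_words: int = 5) -> list[str]:
    """Weed out low-quality text before it reaches the training set.
    The heuristics here are illustrative stand-ins, not Typewise's."""
    cleaned = []
    for text in tickets:
        text = text.strip()
        if len(text.split()) < min_words:
            continue  # too short to teach the model anything useful
        if re.search(r"unsubscribe|lorem ipsum", text, re.IGNORECASE):
            continue  # obvious boilerplate or junk
        if sum(ch.isupper() for ch in text) > 0.5 * len(text):
            continue  # mostly caps -- likely low-quality language
        cleaned.append(text)
    return cleaned

def refine_model(base_model, training_texts: list[str]):
    """Placeholder for refining the base language model on the filtered
    customer data; the actual training loop is Typewise's own."""
    ...

# tickets = load_last_two_years_of_tickets()  # hypothetical loader
# model = refine_model(base_model, filter_tickets(tickets))
```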

Typewise may also segment its AI models depending on the linguistic context — since, for example, the language of a business’ email comms with its customers may be rather different from that of live text chat (which is likely more fluid and informal). So it’s doing a lot of background structuring of customer data inputs and datasets in order to generate more contextually appropriate (and therefore productive) text predictions — including using machine learning to help automate that data structuring.
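
A rough way to picture that segmentation (with entirely hypothetical model names) is a simple dispatcher that routes each request to a channel-specific model and falls back to the base model otherwise:

```python
# Hypothetical channel-to-model mapping -- names are invented for illustration.
MODELS = {
    "email": "client-email-model",     # refined on past email threads
    "live_chat": "client-chat-model",  # refined on shorter, informal chat turns
}

def pick_model(channel: str) -> str:
    # Fall back to the generic base model for channels that haven't been segmented yet.
    return MODELS.get(channel, "base-multilingual-model")
```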

“It’s actual content because we narrow down the scope to a very specific use case,” Eberle reiterates, suggesting this approach gives it a particular edge vs. startups that are relying on a generative language model, like GPT-2 or GPT-3, to power text prediction for their own B2B play.

He also highlights that the product has been built so the AI training process takes place within the customer’s systems — rather than requiring that they upload reams of customer data. (NB: Analytics of the model’s performance may still entail data being sent back to Typewise, but Eberle says it offers a few levels of data sharing, so this process does not have to involve actual customer content being uploaded if the client prefers not to.)

“There are obviously now all the new companies working on language assistance, paraphrasing tools, trying to optimize the language, giving you suggestions [etc.], and many of those use GPT-3 as their technology. They don’t have their own technology … and the downside is, for example, a [large telco] or insurance company is not just going to hand over all their customer communications for you to train the AI. So the way we do it is we can almost deploy an instance of the AI into the customer’s IT infrastructure and that way all the customer data stays with the enterprise but our AI becomes, kind of, part of their data structure,” he says, adding: “And that’s how we circumvent any IT security, data privacy issues that would probably otherwise make this pretty much impossible.”

Latency is one key challenge for Typewise, given its text predictions need to update in real time during live text chats in order to be useful (rather than frustrating) for the human agents the tech is imbuing with superhuman typing speed. Eberle says the company has focused on optimizing latency, which he argues also gives it an edge over text-generation tools that have not prioritized shrinking processing time.

“Right now our use case is that we’re interacting with a human being and that’s very different technologically from text generation,” he notes. “Because ours needs to have extremely low latency — we cannot wait 300 or 500 milliseconds, which also seems very low. But after each keystroke we immediately need to update the prediction. Otherwise it becomes unusable for a human being. So the latency needs to be around 50 milliseconds or even lower.

“So in the background that’s one of the big constraints and one of the challenges in building this.”
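
One simple way to picture that constraint is a keystroke handler that refuses to show a suggestion at all if the model cannot answer within the budget. This is a sketch only, with the 50 millisecond figure taken from Eberle’s comments and everything else assumed:

```python
import asyncio
import time
from typing import Optional

LATENCY_BUDGET_S = 0.05  # ~50 ms per keystroke, per Eberle

async def predict(prefix: str) -> str:
    """Stand-in for the real model call; here it just simulates some work."""
    await asyncio.sleep(0.01)
    return prefix + " will arrive tomorrow."

async def on_keystroke(prefix: str) -> Optional[str]:
    """Update the suggestion after each keystroke, but drop it if the model
    can't answer inside the budget -- a stale suggestion is worse than none."""
    start = time.perf_counter()
    try:
        suggestion = await asyncio.wait_for(predict(prefix), timeout=LATENCY_BUDGET_S)
    except asyncio.TimeoutError:
        return None
    print(f"prediction served in {(time.perf_counter() - start) * 1000:.1f} ms")
    return suggestion

# asyncio.run(on_keystroke("Your parcel"))
```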

Given it can already predict whole sentences as a human types, could Typewise envisage further developing its technology to entirely automate customer-facing comms for its customers — at least in specific segments, say customer service emails for a parcel delivery firm or live chat for insurance sales?

Eberle responds to this question by saying one of the next features on its roadmap is “something toward auto-reply” — beyond the sorts of template-based, “pre-set” responses that can already trigger an automated email with a degree of contextual relevance but where “the answer you get is always based on a pre-written template.”

“What we hear from a lot of companies [is] that that’s what their clients don’t appreciate,” he says. “How we see the future is that with more maturity … for a certain type of ticket … eventually we will see that for certain inquiries … 99% or even higher accuracy reply to that and then you can just automate and say okay you don’t need a human being anymore once the threshold of certainty is above a certain number.

“But the difference is the way that we would generate those emails are not based on a pre-written text — we build it bottom up. We build it word by word. Like a human would construct it. That’s how the AI works — how we built it.”

“Right now with this one client that I mentioned we got to 35% automation — so 35%, on average, of the emails were automatically written by Typewise, and that percentage will go up hopefully. That’s what we’re working on,” he continues. “So right now it couldn’t yet complete an entire email with five different content messages on its own without a human input but obviously over time as those 35% go more up then that will be the case — and I think that’s also the goal in the end.”
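
The gating logic he describes, automating only once the model’s certainty clears a threshold and otherwise keeping a human in the loop, could be sketched roughly as follows, with the threshold taken from his “99% or even higher” remark and all function names hypothetical:

```python
CONFIDENCE_THRESHOLD = 0.99  # "99% or even higher", per Eberle

def send_reply(draft: str) -> str:
    # Stub: in reality this would go out via the client's email or chat system.
    return f"auto-sent: {draft}"

def queue_for_agent(ticket: str, suggested_draft: str) -> str:
    # Stub: hand the ticket, plus a suggested draft, to a human agent.
    return f"queued for agent with draft: {suggested_draft}"

def handle_ticket(ticket_text: str, generate_reply) -> str:
    """Gate automation on model confidence. `generate_reply` is assumed to
    return a (draft, confidence) pair -- a hypothetical interface."""
    draft, confidence = generate_reply(ticket_text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return send_reply(draft)  # fully automated path
    return queue_for_agent(ticket_text, suggested_draft=draft)  # human in the loop
```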

On the competition front, tech giants like Microsoft and Google are of course doing technologically similar things around text prediction — but, typically, for their own products. Although that could change. “So that’s what we’re watching closely,” notes Eberle.

He also predicts (ha!) Grammarly might expand into offering text prediction. “They don’t have text prediction at this point in time but I’m quite sure as the most valuable language tool they will most likely move into that area as well,” he suggests. “And I see our differentiation, really, as customization and the ability to do this with all the data privacy concerns around it.”

Other rival products he name-checks include the well-resourced Wordtune (made by AI21 Labs) and Dutch startup Deep Desk.

But he also points to “value add” features in Typewise’s pipeline as set to expand its differentiation — such as mapping customer satisfaction scores to language choices/styles to try to identify the best linguistic approaches that lead to happy customers.

