
Interview with OpenAI’s Greg Brockman: GPT-4 isn’t perfect, but neither are you


Greg Brockman onstage at TechCrunch Disrupt 2019
Image Credits: TechCrunch

OpenAI shipped GPT-4 yesterday, the much-anticipated text-generating AI model, and it’s a curious piece of work.

GPT-4 improves upon its predecessor, GPT-3, in key ways: it makes more factually accurate statements, for example, and lets developers prescribe its style and behavior more easily. It’s also multimodal in the sense that it can understand images, allowing it to caption and even explain in detail the contents of a photo.

But GPT-4 has serious shortcomings. Like GPT-3, the model “hallucinates” facts and makes basic reasoning errors. In one example on OpenAI’s own blog, GPT-4 describes Elvis Presley as the “son of an actor.” (Neither of his parents were actors.)

To get a better handle on GPT-4’s development cycle and its capabilities, as well as its limitations, TechCrunch spoke with Greg Brockman, one of the co-founders of OpenAI and its president, via a video call on Tuesday.

Asked to compare GPT-4 to GPT-3, Brockman had one word: Different.

“It’s just different,” he told TechCrunch. “There’s still a lot of problems and mistakes that [the model] makes … but you can really see the jump in skill in things like calculus or law, where it went from being really bad at certain domains to actually quite good relative to humans.”

Test results support his case. On the AP Calculus BC exam, GPT-4 scores a 4 out of 5 while GPT-3.5, the intermediate model between GPT-3 and GPT-4, scores a 1. And in a simulated bar exam, GPT-4 passes with a score around the top 10% of test takers; GPT-3.5’s score hovered around the bottom 10%.

Shifting gears, one of GPT-4’s more intriguing aspects is the above-mentioned multimodality. Unlike GPT-3 and GPT-3.5, which could only accept text prompts (e.g. “Write an essay about giraffes”), GPT-4 can take a prompt of both images and text to perform some action (e.g. an image of giraffes in the Serengeti with the prompt “How many giraffes are shown here?”).
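For illustration only: image input wasn’t broadly exposed through the public API at launch, so the request below is a hypothetical sketch of how an image-plus-text prompt might be expressed using the chat-style message format. The image content field and the URL are assumptions, not a documented GPT-4 call.

```python
# Hypothetical sketch of an image-plus-text prompt to GPT-4.
# NOTE: image input was not publicly available in the API at launch;
# the "image_url" content entry here is an assumption for illustration.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How many giraffes are shown here?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/giraffes.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message["content"])
```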

That’s because GPT-4 was trained on image and text data while its predecessors were only trained on text. OpenAI says that the training data came from “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but Brockman demurred when I asked for specifics. (Training data has gotten OpenAI into legal trouble before.)

GPT-4’s image understanding abilities are quite impressive. For example, fed the prompt “What’s funny about this image? Describe it panel by panel” plus a three-paneled image showing a fake VGA cable being plugged into an iPhone, GPT-4 gives a breakdown of each image panel and correctly explains the joke (“The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port”).

Only a single launch partner has access to GPT-4’s image analysis capabilities at the moment — an assistive app for the visually impaired called Be My Eyes. Brockman says that the wider rollout, whenever it happens, will be “slow and intentional” as OpenAI evaluates the risks and benefits.

“There’s policy issues like facial recognition and how to treat images of people that we need to address and work through,” Brockman said. “We need to figure out, like, where the sort of danger zones are — where the red lines are — and then clarify that over time.”

OpenAI dealt with similar ethical dilemmas around DALL-E 2, its text-to-image system. After initially disabling the capability, OpenAI allowed customers to upload people’s faces to edit them using the AI-powered image-generating system. At the time, OpenAI claimed that upgrades to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content.

Another perennial challenge is preventing GPT-4 from being used in unintended ways that might inflict harm — psychological, monetary or otherwise. Hours after the model’s release, Israeli cybersecurity startup Adversa AI published a blog post demonstrating methods to bypass OpenAI’s content filters and get GPT-4 to generate phishing emails, offensive descriptions of gay people and other highly objectionable text.

It’s not a new phenomenon in the language model domain. Meta’s BlenderBot and OpenAI’s ChatGPT, too, have been prompted to say wildly offensive things, and even reveal sensitive details about their inner workings. But many had hoped, this reporter included, that GPT-4 might deliver significant improvements on the moderation front.

When asked about GPT-4’s robustness, Brockman stressed that the model has gone through six months of safety training and that, in internal tests, it was 82% less likely to respond to requests for content disallowed by OpenAI’s usage policy and 40% more likely to produce “factual” responses than GPT-3.5.

“We spent a lot of time trying to understand what GPT-4 is capable of,” Brockman said. “Getting it out in the world is how we learn. We’re constantly making updates, including a bunch of improvements, so that the model is much more scalable to whatever personality or sort of mode you want it to be in.”

The early real-world results aren’t that promising, frankly. Beyond the Adversa AI tests, Bing Chat, Microsoft’s chatbot powered by GPT-4, has been shown to be highly susceptible to jailbreaking. Using carefully tailored inputs, users have been able to get the bot to profess love, threaten harm, defend the Holocaust and invent conspiracy theories.

Brockman didn’t deny that GPT-4 falls short here. But he emphasized the model’s new steerability tools, intended to mitigate these failures, including an API-level capability called “system” messages. System messages are essentially instructions that set the tone — and establish boundaries — for GPT-4’s interactions. For example, a system message might read: “You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves.”

The idea is that the system messages act as guardrails to prevent GPT-4 from veering off course.
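For a concrete sense of how that works, here is a minimal sketch of passing the Socratic-tutor instruction as a system message through the chat completions API; the student question is a placeholder invented for this example.

```python
# Minimal sketch: steering GPT-4 with a "system" message via the chat API.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor that always responds in the Socratic style. "
                "You never give the student the answer, but always try to ask "
                "just the right question to help them learn to think for themselves."
            ),
        },
        # Placeholder user turn for illustration.
        {"role": "user", "content": "How do I solve 3x + 5 = 14?"},
    ],
)
print(response.choices[0].message["content"])
```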

“Really figuring out GPT-4’s tone, the style and the substance has been a great focus for us,” Brockman said. “I think we’re starting to understand a little bit more of how to do the engineering, about how to have a repeatable process that kind of gets you to predictable results that are going to be really useful to people.”

Brockman also pointed to Evals, OpenAI’s newly open sourced software framework to evaluate the performance of its AI models, as a sign of OpenAI’s commitment to “robustifying” its models. Evals lets users develop and run benchmarks for evaluating models like GPT-4 while inspecting their performance — a sort of crowdsourced approach to model testing.

“With Evals, we can see the [use cases] that users care about in a systematic form that we’re able to test against,” Brockman said. “Part of why we [open sourced] it is because we’re moving away from releasing a new model every three months — whatever it was previously — to make constant improvements. You don’t make what you don’t measure, right? As we make new versions [of the model], we can at least be aware what those changes are.”
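As a rough illustration of the idea — not the openai/evals framework itself — a benchmark of this kind boils down to scoring a model’s answers against known-good ones. The sample questions and the string-matching rule below are invented for the sketch.

```python
# Conceptual sketch of an eval: compare a model's answers against "ideal"
# answers for a fixed set of prompts and report accuracy. This is an
# illustration of the idea, not the openai/evals API.
import openai

SAMPLES = [
    {"input": "What is the capital of France?", "ideal": "Paris"},
    {"input": "What is 17 * 3?", "ideal": "51"},
]

def run_eval(model: str) -> float:
    correct = 0
    for sample in SAMPLES:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": sample["input"]}],
        )
        answer = response.choices[0].message["content"]
        # Crude check: count the answer as correct if it contains the ideal string.
        if sample["ideal"].lower() in answer.lower():
            correct += 1
    return correct / len(SAMPLES)

print(f"accuracy: {run_eval('gpt-4'):.2f}")
```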

I asked Brockman if OpenAI would ever compensate people to test its models with Evals. He wouldn’t commit to that, but he did note that — for a limited time — OpenAI’s granting select Evals users early access to the GPT-4 API.

The conversation also touched on GPT-4’s context window, which refers to the text the model can consider before generating additional text. OpenAI is testing a version of GPT-4 that can “remember” roughly 50 pages of content, or five times as much as the vanilla GPT-4 can hold in its “memory” and eight times as much as GPT-3.

Brockman believes that the expanded context window will lead to new, previously unexplored applications, particularly in the enterprise. He envisions an AI chatbot built for a company that leverages context and knowledge from different sources, including employees across departments, to answer questions in a very informed but conversational way.

That’s not a new concept. But Brockman makes the case that GPT-4’s answers will be far more useful than those from chatbots and search engines today.

“Previously, the model didn’t have any knowledge of who you are, what you’re interested in and so on,” Brockman said. “Having that kind of history [with the larger context window] is definitely going to make it more able … it’ll turbocharge what people can do.”
