Interview with OpenAI’s Greg Brockman: GPT-4 isn’t perfect, but neither are you

Greg Brockman onstage at TechCrunch Disrupt 2019
Image Credits: TechCrunch

OpenAI shipped GPT-4 yesterday, the much-anticipated text-generating AI model, and it’s a curious piece of work.

GPT-4 improves upon its predecessor, GPT-3, in key ways: for example, it produces more factually accurate statements and lets developers prescribe its style and behavior more easily. It’s also multimodal in the sense that it can understand images, allowing it to caption and even explain in detail the contents of a photo.

But GPT-4 has serious shortcomings. Like GPT-3, the model “hallucinates” facts and makes basic reasoning errors. In one example on OpenAI’s own blog, GPT-4 describes Elvis Presley as the “son of an actor.” (Neither of his parents was an actor.)

To get a better handle on GPT-4’s development cycle and its capabilities, as well as its limitations, TechCrunch spoke with Greg Brockman, one of the co-founders of OpenAI and its president, via a video call on Tuesday.

Asked to compare GPT-4 to GPT-3, Brockman had one word: Different.

“It’s just different,” he told TechCrunch. “There’s still a lot of problems and mistakes that [the model] makes … but you can really see the jump in skill in things like calculus or law, where it went from being really bad at certain domains to actually quite good relative to humans.”

Test results support his case. On the AP Calculus BC exam, GPT-4 scores a 4 out of 5 while GPT-3 scores a 1. (GPT-3.5, the intermediate model between GPT-3 and GPT-4, also scores a 4.) And in a simulated bar exam, GPT-4 passes with a score around the top 10% of test takers; GPT-3.5’s score hovered around the bottom 10%.

Shifting gears, one of GPT-4’s more intriguing aspects is the above-mentioned multimodality. Unlike GPT-3 and GPT-3.5, which could only accept text prompts (e.g. “Write an essay about giraffes”), GPT-4 can take a prompt of both images and text to perform some action (e.g. an image of giraffes in the Serengeti with the prompt “How many giraffes are shown here?”).

That’s because GPT-4 was trained on image and text data while its predecessors were only trained on text. OpenAI says that the training data came from “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but Brockman demurred when I asked for specifics. (Training data has gotten OpenAI into legal trouble before.)

GPT-4’s image understanding abilities are quite impressive. For example, fed the prompt “What’s funny about this image? Describe it panel by panel” plus a three-paneled image showing a fake VGA cable being plugged into an iPhone, GPT-4 gives a breakdown of each image panel and correctly explains the joke (“The humor in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port”).
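
For a concrete picture of how such a prompt is assembled, here is a minimal sketch of a combined image-and-text request in code. Image input was not broadly available at GPT-4’s launch, so the request shape below follows the content-parts format of OpenAI’s later public vision-capable chat API; the model name and image URL are placeholders, not a description of the launch-day interface.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for a vision-capable GPT-4-class model
    messages=[
        {
            "role": "user",
            "content": [
                # Text and image are sent together as parts of one user turn.
                {"type": "text", "text": "How many giraffes are shown here?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/giraffes.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```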

Only a single launch partner has access to GPT-4’s image analysis capabilities at the moment — an assistive app for the visually impaired called Be My Eyes. Brockman says that the wider rollout, whenever it happens, will be “slow and intentional” as OpenAI evaluates the risks and benefits.

“There’s policy issues like facial recognition and how to treat images of people that we need to address and work through,” Brockman said. “We need to figure out, like, where the sort of danger zones are — where the red lines are — and then clarify that over time.”

OpenAI dealt with similar ethical dilemmas around DALL-E 2, its text-to-image system. After initially disabling the capability, OpenAI allowed customers to upload people’s faces to edit them using the AI-powered image-generating system. At the time, OpenAI claimed that upgrades to its safety system made the face-editing feature possible by “minimizing the potential of harm” from deepfakes as well as attempts to create sexual, political and violent content.

Another perennial challenge is preventing GPT-4 from being used in unintended ways that might inflict harm — psychological, monetary or otherwise. Hours after the model’s release, Israeli cybersecurity startup Adversa AI published a blog post demonstrating methods to bypass OpenAI’s content filters and get GPT-4 to generate phishing emails, offensive descriptions of gay people and other highly objectionable text.

It’s not a new phenomenon in the language model domain. Meta’s BlenderBot and OpenAI’s ChatGPT, too, have been prompted to say wildly offensive things, and even reveal sensitive details about their inner workings. But many had hoped, this reporter included, that GPT-4 might deliver significant improvements on the moderation front.

When asked about GPT-4’s robustness, Brockman stressed that the model has gone through six months of safety training and that, in internal tests, it was 82% less likely to respond to requests for content disallowed by OpenAI’s usage policy and 40% more likely to produce “factual” responses than GPT-3.5.

“We spent a lot of time trying to understand what GPT-4 is capable of,” Brockman said. “Getting it out in the world is how we learn. We’re constantly making updates, [including] a bunch of improvements, so that the model is much more scalable to whatever personality or sort of mode you want it to be in.”

The early real-world results aren’t that promising, frankly. Beyond the Adversa AI tests, Bing Chat, Microsoft’s chatbot powered by GPT-4, has been shown to be highly susceptible to jailbreaking. Using carefully tailored inputs, users have been able to get the bot to profess love, threaten harm, defend the Holocaust and invent conspiracy theories.

Brockman didn’t deny that GPT-4 falls short here. But he emphasized the model’s new steerability tools, intended to mitigate such misuse, including an API-level capability called “system” messages. System messages are essentially instructions that set the tone — and establish boundaries — for GPT-4’s interactions. For example, a system message might read: “You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves.”

The idea is that the system messages act as guardrails to prevent GPT-4 from veering off course.
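
To make the mechanism concrete, here is a minimal sketch of how a system message is passed through OpenAI’s chat completions API. The Socratic-tutor instruction is the example quoted above; the model name and the user’s question are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            # The system message sets tone and boundaries for every turn that follows.
            "role": "system",
            "content": (
                "You are a tutor that always responds in the Socratic style. "
                "You never give the student the answer, but always try to ask "
                "just the right question to help them learn to think for themselves."
            ),
        },
        {"role": "user", "content": "What is the derivative of x^2?"},
    ],
)
# The reply should be a guiding question rather than the answer itself.
print(response.choices[0].message.content)
```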

“Really figuring out GPT-4’s tone, the style and the substance has been a great focus for us,” Brockman said. “I think we’re starting to understand a little bit more of how to do the engineering, about how to have a repeatable process that kind of gets you to predictable results that are going to be really useful to people.”

Brockman also pointed to Evals, OpenAI’s newly open-sourced software framework for evaluating the performance of its AI models, as a sign of OpenAI’s commitment to “robustifying” its models. Evals lets users develop and run benchmarks for evaluating models like GPT-4 while inspecting their performance — a sort of crowdsourced approach to model testing.

“With Evals, we can see the [use cases] that users care about in a systematic form that we’re able to test against,” Brockman said. “Part of why we [open sourced] it is because we’re moving away from releasing a new model every three months — whatever it was previously — to make constant improvements. You don’t make what you don’t measure, right? As we make new versions [of the model], we can at least be aware what those changes are.”
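
As a rough illustration of the workflow Brockman describes, the snippet below prepares samples in the exact-match format used by the basic evals in OpenAI’s open-source Evals repository. The file name, eval name and the CLI invocation in the closing comment are assumptions for illustration, not verified commands.

```python
import json

# Each sample pairs a chat-style "input" with the "ideal" answer that an
# exact-match eval checks the model's output against.
samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "ideal": "Paris",
    },
]

with open("capital_cities.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# After registering the samples under a name in the Evals registry, a run
# might look like (illustrative):
#   oaieval gpt-4 capital-cities
```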

I asked Brockman if OpenAI would ever compensate people to test its models with Evals. He wouldn’t commit to that, but he did note that — for a limited time — OpenAI is granting select Evals users early access to the GPT-4 API.

Our conversation with Brockman also touched on GPT-4’s context window, which refers to the text the model can consider before generating additional text. OpenAI is testing a version of GPT-4 that can “remember” roughly 50 pages of content, or five times as much as the vanilla GPT-4 can hold in its “memory” and eight times as much as GPT-3.
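
For developers, the practical question is whether a given prompt fits inside that window at all. Here is a minimal sketch of a rough check using the tiktoken tokenizer; the token limits in the comments reflect the figures OpenAI published for the standard and extended GPT-4 variants, and the file name is a placeholder.

```python
import tiktoken

def fits_in_context(text: str, context_tokens: int = 8192,
                    reserved_for_reply: int = 1024) -> bool:
    """Roughly check whether a prompt leaves room for a reply.

    Chat formatting adds a small per-message overhead that this count ignores.
    """
    encoding = tiktoken.encoding_for_model("gpt-4")
    prompt_tokens = len(encoding.encode(text))
    return prompt_tokens + reserved_for_reply <= context_tokens

document = open("meeting_notes.txt").read()  # placeholder input file
if not fits_in_context(document):
    # 8,192 tokens is the standard GPT-4 limit OpenAI published at launch;
    # the extended variant raises that to 32,768 tokens.
    print("Too long for the standard context window; chunk it or use the extended model.")
```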

Brockman believes that the expanded context window will lead to new, previously unexplored applications, particularly in the enterprise. He envisions an AI chatbot built for a company that leverages context and knowledge from different sources, including employees across departments, to answer questions in a very informed but conversational way.

That’s not a new concept. But Brockman makes the case that GPT-4’s answers will be far more useful than those from chatbots and search engines today.

“Previously, the model didn’t have any knowledge of who you are, what you’re interested in and so on,” Brockman said. “Having that kind of history [with the larger context window] is definitely going to make it more able … it’ll turbocharge what people can do.”
